00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 123 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3624 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.135 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.136 The recommended git tool is: git 00:00:00.136 using credential 00000000-0000-0000-0000-000000000002 00:00:00.138 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.188 Fetching changes from the remote Git repository 00:00:00.190 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.231 Using shallow fetch with depth 1 00:00:00.231 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.231 > git --version # timeout=10 00:00:00.266 > git --version # 'git version 2.39.2' 00:00:00.266 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.289 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.289 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.596 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.608 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:09.619 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:09.619 > git config core.sparsecheckout # timeout=10 00:00:09.633 > git read-tree -mu HEAD # timeout=10 00:00:09.647 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:09.663 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:09.663 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:09.764 [Pipeline] Start of Pipeline 00:00:09.775 [Pipeline] library 00:00:09.777 Loading library shm_lib@master 00:00:09.777 Library shm_lib@master is cached. Copying from home. 00:00:09.793 [Pipeline] node 00:00:09.806 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:09.808 [Pipeline] { 00:00:09.819 [Pipeline] catchError 00:00:09.820 [Pipeline] { 00:00:09.835 [Pipeline] wrap 00:00:09.842 [Pipeline] { 00:00:09.851 [Pipeline] stage 00:00:09.853 [Pipeline] { (Prologue) 00:00:09.868 [Pipeline] echo 00:00:09.869 Node: VM-host-SM38 00:00:09.877 [Pipeline] cleanWs 00:00:09.889 [WS-CLEANUP] Deleting project workspace... 00:00:09.889 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.897 [WS-CLEANUP] done 00:00:10.142 [Pipeline] setCustomBuildProperty 00:00:10.242 [Pipeline] httpRequest 00:00:10.581 [Pipeline] echo 00:00:10.583 Sorcerer 10.211.164.101 is alive 00:00:10.592 [Pipeline] retry 00:00:10.595 [Pipeline] { 00:00:10.608 [Pipeline] httpRequest 00:00:10.614 HttpMethod: GET 00:00:10.614 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.615 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.633 Response Code: HTTP/1.1 200 OK 00:00:10.634 Success: Status code 200 is in the accepted range: 200,404 00:00:10.634 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:16.096 [Pipeline] } 00:00:16.113 [Pipeline] // retry 00:00:16.121 [Pipeline] sh 00:00:16.407 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:16.425 [Pipeline] httpRequest 00:00:16.842 [Pipeline] echo 00:00:16.843 Sorcerer 10.211.164.101 is alive 00:00:16.851 [Pipeline] retry 00:00:16.853 [Pipeline] { 00:00:16.866 [Pipeline] httpRequest 00:00:16.870 HttpMethod: GET 00:00:16.871 URL: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:16.871 Sending request to url: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:16.886 Response Code: HTTP/1.1 200 OK 00:00:16.887 Success: Status code 200 is in the accepted range: 200,404 00:00:16.887 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:46.980 [Pipeline] } 00:01:46.997 [Pipeline] // retry 00:01:47.004 [Pipeline] sh 00:01:47.285 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:50.597 [Pipeline] sh 00:01:50.880 + git -C spdk log --oneline -n5 00:01:50.880 b18e1bd62 version: v24.09.1-pre 00:01:50.880 19524ad45 version: v24.09 00:01:50.880 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:50.880 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:50.880 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:50.901 [Pipeline] withCredentials 00:01:50.912 > git --version # timeout=10 00:01:50.925 > git --version # 'git version 2.39.2' 00:01:50.946 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:50.948 [Pipeline] { 00:01:50.957 [Pipeline] retry 00:01:50.959 [Pipeline] { 00:01:50.974 [Pipeline] sh 00:01:51.260 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:51.272 [Pipeline] } 00:01:51.287 [Pipeline] // retry 00:01:51.293 [Pipeline] } 00:01:51.311 [Pipeline] // withCredentials 00:01:51.320 [Pipeline] httpRequest 00:01:51.724 [Pipeline] echo 00:01:51.725 Sorcerer 10.211.164.101 is alive 00:01:51.734 [Pipeline] retry 00:01:51.735 [Pipeline] { 00:01:51.749 [Pipeline] httpRequest 00:01:51.755 HttpMethod: GET 00:01:51.755 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:51.756 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:51.766 Response Code: HTTP/1.1 200 OK 00:01:51.767 Success: Status code 200 is in the accepted range: 200,404 00:01:51.767 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:06.723 [Pipeline] } 00:02:06.740 [Pipeline] // retry 00:02:06.747 [Pipeline] sh 00:02:07.031 + tar --no-same-owner -xf 
dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:08.984 [Pipeline] sh 00:02:09.267 + git -C dpdk log --oneline -n5 00:02:09.267 eeb0605f11 version: 23.11.0 00:02:09.267 238778122a doc: update release notes for 23.11 00:02:09.267 46aa6b3cfc doc: fix description of RSS features 00:02:09.267 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:09.267 7e421ae345 devtools: support skipping forbid rule check 00:02:09.286 [Pipeline] writeFile 00:02:09.300 [Pipeline] sh 00:02:09.587 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:09.600 [Pipeline] sh 00:02:09.883 + cat autorun-spdk.conf 00:02:09.883 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.883 SPDK_TEST_NVME=1 00:02:09.883 SPDK_TEST_FTL=1 00:02:09.883 SPDK_TEST_ISAL=1 00:02:09.883 SPDK_RUN_ASAN=1 00:02:09.883 SPDK_RUN_UBSAN=1 00:02:09.883 SPDK_TEST_XNVME=1 00:02:09.883 SPDK_TEST_NVME_FDP=1 00:02:09.883 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:09.883 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:09.883 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:09.891 RUN_NIGHTLY=1 00:02:09.893 [Pipeline] } 00:02:09.907 [Pipeline] // stage 00:02:09.922 [Pipeline] stage 00:02:09.924 [Pipeline] { (Run VM) 00:02:09.936 [Pipeline] sh 00:02:10.220 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:10.220 + echo 'Start stage prepare_nvme.sh' 00:02:10.220 Start stage prepare_nvme.sh 00:02:10.220 + [[ -n 1 ]] 00:02:10.220 + disk_prefix=ex1 00:02:10.220 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:02:10.220 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:02:10.220 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:02:10.220 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.220 ++ SPDK_TEST_NVME=1 00:02:10.220 ++ SPDK_TEST_FTL=1 00:02:10.220 ++ SPDK_TEST_ISAL=1 00:02:10.220 ++ SPDK_RUN_ASAN=1 00:02:10.220 ++ SPDK_RUN_UBSAN=1 00:02:10.220 ++ SPDK_TEST_XNVME=1 00:02:10.220 ++ SPDK_TEST_NVME_FDP=1 00:02:10.220 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:10.220 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:10.220 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.220 ++ RUN_NIGHTLY=1 00:02:10.220 + cd /var/jenkins/workspace/nvme-vg-autotest 00:02:10.220 + nvme_files=() 00:02:10.220 + declare -A nvme_files 00:02:10.220 + backend_dir=/var/lib/libvirt/images/backends 00:02:10.220 + nvme_files['nvme.img']=5G 00:02:10.220 + nvme_files['nvme-cmb.img']=5G 00:02:10.220 + nvme_files['nvme-multi0.img']=4G 00:02:10.220 + nvme_files['nvme-multi1.img']=4G 00:02:10.220 + nvme_files['nvme-multi2.img']=4G 00:02:10.220 + nvme_files['nvme-openstack.img']=8G 00:02:10.220 + nvme_files['nvme-zns.img']=5G 00:02:10.220 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:10.220 + (( SPDK_TEST_FTL == 1 )) 00:02:10.220 + nvme_files["nvme-ftl.img"]=6G 00:02:10.220 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:10.220 + nvme_files["nvme-fdp.img"]=1G 00:02:10.221 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:10.221 + for nvme in "${!nvme_files[@]}" 00:02:10.221 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:02:10.221 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:10.221 + for nvme in "${!nvme_files[@]}" 00:02:10.221 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:02:11.163 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:11.163 + for nvme in "${!nvme_files[@]}" 00:02:11.163 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:02:11.163 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:11.163 + for nvme in "${!nvme_files[@]}" 00:02:11.163 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:02:11.163 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:11.163 + for nvme in "${!nvme_files[@]}" 00:02:11.163 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:02:11.163 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:11.163 + for nvme in "${!nvme_files[@]}" 00:02:11.163 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:02:11.163 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:11.163 + for nvme in "${!nvme_files[@]}" 00:02:11.163 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:02:11.163 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:11.163 + for nvme in "${!nvme_files[@]}" 00:02:11.163 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:02:11.425 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:11.425 + for nvme in "${!nvme_files[@]}" 00:02:11.425 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:02:11.425 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:11.425 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:02:11.425 + echo 'End stage prepare_nvme.sh' 00:02:11.425 End stage prepare_nvme.sh 00:02:11.438 [Pipeline] sh 00:02:11.723 + DISTRO=fedora39 00:02:11.723 + CPUS=10 00:02:11.723 + RAM=12288 00:02:11.723 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:11.723 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:11.723 00:02:11.723 
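The comma-separated fields of each -b disk spec above appear to line up positionally with the NVME_* lists printed in the configuration dump that follows (path, type, namespaces, cmb, pmr, zns, ms, fdp). A minimal bash sketch of that inferred layout — illustrative only, not the actual vagrant_create_vm.sh parser:

    # Inferred field order: path,type,namespaces,cmb,pmr,zns,ms,fdp
    spec='/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true'
    IFS=, read -r path type ns cmb pmr zns ms fdp <<< "$spec"
    echo "path=$path type=$type ms=${ms:-off} fdp=${fdp:-off}"   # ms=true for the FTL disk
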
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:02:11.723 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:02:11.723 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:02:11.723 HELP=0 00:02:11.723 DRY_RUN=0 00:02:11.723 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:02:11.723 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:11.723 NVME_AUTO_CREATE=0 00:02:11.723 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:02:11.723 NVME_CMB=,,,, 00:02:11.723 NVME_PMR=,,,, 00:02:11.723 NVME_ZNS=,,,, 00:02:11.723 NVME_MS=true,,,, 00:02:11.723 NVME_FDP=,,,on, 00:02:11.723 SPDK_VAGRANT_DISTRO=fedora39 00:02:11.723 SPDK_VAGRANT_VMCPU=10 00:02:11.723 SPDK_VAGRANT_VMRAM=12288 00:02:11.723 SPDK_VAGRANT_PROVIDER=libvirt 00:02:11.723 SPDK_VAGRANT_HTTP_PROXY= 00:02:11.723 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:11.723 SPDK_OPENSTACK_NETWORK=0 00:02:11.723 VAGRANT_PACKAGE_BOX=0 00:02:11.723 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:11.723 FORCE_DISTRO=true 00:02:11.723 VAGRANT_BOX_VERSION= 00:02:11.723 EXTRA_VAGRANTFILES= 00:02:11.723 NIC_MODEL=e1000 00:02:11.723 00:02:11.723 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:02:11.723 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:14.272 Bringing machine 'default' up with 'libvirt' provider... 00:02:14.845 ==> default: Creating image (snapshot of base box volume). 00:02:15.106 ==> default: Creating domain with the following settings... 
00:02:15.106 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731215047_32dea0c2621e785a13b9 00:02:15.106 ==> default: -- Domain type: kvm 00:02:15.106 ==> default: -- Cpus: 10 00:02:15.106 ==> default: -- Feature: acpi 00:02:15.106 ==> default: -- Feature: apic 00:02:15.106 ==> default: -- Feature: pae 00:02:15.106 ==> default: -- Memory: 12288M 00:02:15.106 ==> default: -- Memory Backing: hugepages: 00:02:15.106 ==> default: -- Management MAC: 00:02:15.106 ==> default: -- Loader: 00:02:15.106 ==> default: -- Nvram: 00:02:15.106 ==> default: -- Base box: spdk/fedora39 00:02:15.106 ==> default: -- Storage pool: default 00:02:15.106 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731215047_32dea0c2621e785a13b9.img (20G) 00:02:15.106 ==> default: -- Volume Cache: default 00:02:15.106 ==> default: -- Kernel: 00:02:15.106 ==> default: -- Initrd: 00:02:15.106 ==> default: -- Graphics Type: vnc 00:02:15.106 ==> default: -- Graphics Port: -1 00:02:15.106 ==> default: -- Graphics IP: 127.0.0.1 00:02:15.106 ==> default: -- Graphics Password: Not defined 00:02:15.106 ==> default: -- Video Type: cirrus 00:02:15.106 ==> default: -- Video VRAM: 9216 00:02:15.106 ==> default: -- Sound Type: 00:02:15.107 ==> default: -- Keymap: en-us 00:02:15.107 ==> default: -- TPM Path: 00:02:15.107 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:15.107 ==> default: -- Command line args: 00:02:15.107 ==> default: -> value=-device, 00:02:15.107 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:15.107 ==> default: -> value=-drive, 00:02:15.107 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:15.107 ==> default: -> value=-device, 00:02:15.107 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:15.107 ==> default: -> value=-device, 00:02:15.107 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:15.107 ==> default: -> value=-drive, 00:02:15.107 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:02:15.107 ==> default: -> value=-device, 00:02:15.107 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:15.107 ==> default: -> value=-device, 00:02:15.107 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:15.107 ==> default: -> value=-drive, 00:02:15.107 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:15.107 ==> default: -> value=-device, 00:02:15.107 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:15.107 ==> default: -> value=-drive, 00:02:15.107 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:15.107 ==> default: -> value=-device, 00:02:15.107 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:15.107 ==> default: -> value=-drive, 00:02:15.107 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:15.107 ==> default: -> value=-device, 00:02:15.107 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:15.107 ==> default: -> value=-device, 00:02:15.107 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:15.107 ==> default: -> value=-device, 00:02:15.107 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:15.107 ==> default: -> value=-drive, 00:02:15.107 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:15.107 ==> default: -> value=-device, 00:02:15.107 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:15.368 ==> default: Creating shared folders metadata... 00:02:15.368 ==> default: Starting domain. 00:02:17.297 ==> default: Waiting for domain to get an IP address... 00:02:32.207 ==> default: Waiting for SSH to become available... 00:02:33.157 ==> default: Configuring and enabling network interfaces... 00:02:37.354 default: SSH address: 192.168.121.152:22 00:02:37.354 default: SSH username: vagrant 00:02:37.354 default: SSH auth method: private key 00:02:39.257 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:44.523 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:48.710 ==> default: Mounting SSHFS shared folder... 00:02:50.629 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:50.629 ==> default: Checking Mount.. 00:02:51.196 ==> default: Folder Successfully Mounted! 00:02:51.454 00:02:51.454 SUCCESS! 00:02:51.454 00:02:51.454 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:51.454 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:51.454 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:51.454 00:02:51.462 [Pipeline] } 00:02:51.477 [Pipeline] // stage 00:02:51.486 [Pipeline] dir 00:02:51.487 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:51.488 [Pipeline] { 00:02:51.501 [Pipeline] catchError 00:02:51.503 [Pipeline] { 00:02:51.515 [Pipeline] sh 00:02:51.794 + vagrant ssh-config --host vagrant 00:02:51.794 + sed -ne '/^Host/,$p' 00:02:51.794 + tee ssh_conf 00:02:54.327 Host vagrant 00:02:54.327 HostName 192.168.121.152 00:02:54.327 User vagrant 00:02:54.327 Port 22 00:02:54.327 UserKnownHostsFile /dev/null 00:02:54.327 StrictHostKeyChecking no 00:02:54.327 PasswordAuthentication no 00:02:54.327 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:54.327 IdentitiesOnly yes 00:02:54.327 LogLevel FATAL 00:02:54.327 ForwardAgent yes 00:02:54.327 ForwardX11 yes 00:02:54.327 00:02:54.339 [Pipeline] withEnv 00:02:54.341 [Pipeline] { 00:02:54.354 [Pipeline] sh 00:02:54.696 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:54.696 source /etc/os-release 00:02:54.696 [[ -e /image.version ]] && img=$(< /image.version) 00:02:54.696 # Minimal, systemd-like check. 
00:02:54.696 if [[ -e /.dockerenv ]]; then 00:02:54.696 # Clear garbage from the node'\''s name: 00:02:54.696 # agt-er_autotest_547-896 -> autotest_547-896 00:02:54.696 # $HOSTNAME is the actual container id 00:02:54.696 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:54.696 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:54.696 # We can assume this is a mount from a host where container is running, 00:02:54.696 # so fetch its hostname to easily identify the target swarm worker. 00:02:54.696 container="$(< /etc/hostname) ($agent)" 00:02:54.696 else 00:02:54.696 # Fallback 00:02:54.696 container=$agent 00:02:54.696 fi 00:02:54.696 fi 00:02:54.696 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:54.696 ' 00:02:54.964 [Pipeline] } 00:02:54.980 [Pipeline] // withEnv 00:02:54.988 [Pipeline] setCustomBuildProperty 00:02:55.002 [Pipeline] stage 00:02:55.005 [Pipeline] { (Tests) 00:02:55.021 [Pipeline] sh 00:02:55.299 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:55.569 [Pipeline] sh 00:02:55.846 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:56.118 [Pipeline] timeout 00:02:56.118 Timeout set to expire in 50 min 00:02:56.120 [Pipeline] { 00:02:56.135 [Pipeline] sh 00:02:56.418 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:56.986 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:56.997 [Pipeline] sh 00:02:57.273 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:57.546 [Pipeline] sh 00:02:57.827 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:58.106 [Pipeline] sh 00:02:58.388 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:58.647 ++ readlink -f spdk_repo 00:02:58.647 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:58.647 + [[ -n /home/vagrant/spdk_repo ]] 00:02:58.647 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:58.647 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:58.647 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:58.647 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:58.647 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:58.647 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:58.647 + cd /home/vagrant/spdk_repo 00:02:58.647 + source /etc/os-release 00:02:58.647 ++ NAME='Fedora Linux' 00:02:58.647 ++ VERSION='39 (Cloud Edition)' 00:02:58.647 ++ ID=fedora 00:02:58.647 ++ VERSION_ID=39 00:02:58.647 ++ VERSION_CODENAME= 00:02:58.647 ++ PLATFORM_ID=platform:f39 00:02:58.648 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:58.648 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:58.648 ++ LOGO=fedora-logo-icon 00:02:58.648 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:58.648 ++ HOME_URL=https://fedoraproject.org/ 00:02:58.648 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:58.648 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:58.648 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:58.648 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:58.648 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:58.648 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:58.648 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:58.648 ++ SUPPORT_END=2024-11-12 00:02:58.648 ++ VARIANT='Cloud Edition' 00:02:58.648 ++ VARIANT_ID=cloud 00:02:58.648 + uname -a 00:02:58.648 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:58.648 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:58.908 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:59.167 Hugepages 00:02:59.167 node hugesize free / total 00:02:59.167 node0 1048576kB 0 / 0 00:02:59.167 node0 2048kB 0 / 0 00:02:59.167 00:02:59.167 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:59.167 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:59.167 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:59.167 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:59.167 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:59.167 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:59.167 + rm -f /tmp/spdk-ld-path 00:02:59.167 + source autorun-spdk.conf 00:02:59.167 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:59.167 ++ SPDK_TEST_NVME=1 00:02:59.167 ++ SPDK_TEST_FTL=1 00:02:59.167 ++ SPDK_TEST_ISAL=1 00:02:59.167 ++ SPDK_RUN_ASAN=1 00:02:59.167 ++ SPDK_RUN_UBSAN=1 00:02:59.167 ++ SPDK_TEST_XNVME=1 00:02:59.167 ++ SPDK_TEST_NVME_FDP=1 00:02:59.167 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:59.167 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:59.167 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:59.167 ++ RUN_NIGHTLY=1 00:02:59.167 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:59.167 + [[ -n '' ]] 00:02:59.167 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:59.425 + for M in /var/spdk/build-*-manifest.txt 00:02:59.425 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:59.425 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:59.425 + for M in /var/spdk/build-*-manifest.txt 00:02:59.425 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:59.425 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:59.425 + for M in /var/spdk/build-*-manifest.txt 00:02:59.425 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:59.425 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:59.425 ++ uname 00:02:59.425 + [[ Linux == 
\L\i\n\u\x ]] 00:02:59.425 + sudo dmesg -T 00:02:59.425 + sudo dmesg --clear 00:02:59.425 + dmesg_pid=5763 00:02:59.425 + [[ Fedora Linux == FreeBSD ]] 00:02:59.425 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:59.425 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:59.425 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:59.425 + [[ -x /usr/src/fio-static/fio ]] 00:02:59.425 + sudo dmesg -Tw 00:02:59.425 + export FIO_BIN=/usr/src/fio-static/fio 00:02:59.425 + FIO_BIN=/usr/src/fio-static/fio 00:02:59.425 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:59.425 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:59.425 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:59.425 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:59.425 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:59.425 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:59.425 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:59.425 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:59.425 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:59.425 Test configuration: 00:02:59.425 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:59.425 SPDK_TEST_NVME=1 00:02:59.425 SPDK_TEST_FTL=1 00:02:59.425 SPDK_TEST_ISAL=1 00:02:59.425 SPDK_RUN_ASAN=1 00:02:59.425 SPDK_RUN_UBSAN=1 00:02:59.425 SPDK_TEST_XNVME=1 00:02:59.425 SPDK_TEST_NVME_FDP=1 00:02:59.425 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:59.425 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:59.425 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:59.425 RUN_NIGHTLY=1 05:04:52 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:59.425 05:04:52 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:59.425 05:04:52 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:59.425 05:04:52 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:59.425 05:04:52 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:59.425 05:04:52 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:59.425 05:04:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.425 05:04:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.425 05:04:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.425 05:04:52 -- paths/export.sh@5 -- $ export PATH 00:02:59.425 05:04:52 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.425 05:04:52 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:59.425 05:04:52 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:59.426 05:04:52 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731215092.XXXXXX 00:02:59.426 05:04:52 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731215092.w3ihR9 00:02:59.426 05:04:52 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:59.426 05:04:52 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:59.426 05:04:52 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:59.426 05:04:52 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:59.426 05:04:52 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:59.426 05:04:52 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:59.426 05:04:52 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:59.426 05:04:52 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:59.426 05:04:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.426 05:04:52 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:59.426 05:04:52 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:59.426 05:04:52 -- pm/common@17 -- $ local monitor 00:02:59.426 05:04:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.426 05:04:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.426 05:04:52 -- pm/common@25 -- $ sleep 1 00:02:59.426 05:04:52 -- pm/common@21 -- $ date +%s 00:02:59.426 05:04:52 -- pm/common@21 -- $ date +%s 00:02:59.426 05:04:52 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731215092 00:02:59.426 05:04:52 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731215092 00:02:59.684 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731215092_collect-cpu-load.pm.log 00:02:59.684 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731215092_collect-vmstat.pm.log 00:03:00.666 05:04:53 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:03:00.666 05:04:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:00.666 05:04:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:00.666 05:04:53 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:00.666 05:04:53 -- spdk/autobuild.sh@16 -- $ date -u 00:03:00.666 Sun 
Nov 10 05:04:53 AM UTC 2024 00:03:00.666 05:04:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:00.666 v24.09-rc1-9-gb18e1bd62 00:03:00.666 05:04:53 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:00.666 05:04:53 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:00.666 05:04:53 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:00.666 05:04:53 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:00.666 05:04:53 -- common/autotest_common.sh@10 -- $ set +x 00:03:00.666 ************************************ 00:03:00.666 START TEST asan 00:03:00.666 ************************************ 00:03:00.666 using asan 00:03:00.666 ************************************ 00:03:00.666 END TEST asan 00:03:00.666 ************************************ 00:03:00.666 05:04:53 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:03:00.666 00:03:00.666 real 0m0.000s 00:03:00.666 user 0m0.000s 00:03:00.666 sys 0m0.000s 00:03:00.666 05:04:53 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:00.666 05:04:53 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:00.666 05:04:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:00.666 05:04:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:00.666 05:04:53 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:00.666 05:04:53 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:00.666 05:04:53 -- common/autotest_common.sh@10 -- $ set +x 00:03:00.666 ************************************ 00:03:00.666 START TEST ubsan 00:03:00.666 ************************************ 00:03:00.666 using ubsan 00:03:00.666 05:04:53 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:00.666 00:03:00.666 real 0m0.000s 00:03:00.666 user 0m0.000s 00:03:00.666 sys 0m0.000s 00:03:00.666 ************************************ 00:03:00.666 END TEST ubsan 00:03:00.666 ************************************ 00:03:00.666 05:04:53 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:00.666 05:04:53 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:00.666 05:04:53 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:03:00.666 05:04:53 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:03:00.666 05:04:53 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:03:00.666 05:04:53 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:03:00.666 05:04:53 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:00.666 05:04:53 -- common/autotest_common.sh@10 -- $ set +x 00:03:00.666 ************************************ 00:03:00.666 START TEST build_native_dpdk 00:03:00.666 ************************************ 00:03:00.666 05:04:53 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:03:00.666 05:04:53 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:03:00.666 eeb0605f11 version: 23.11.0 00:03:00.666 238778122a doc: update release notes for 23.11 00:03:00.666 46aa6b3cfc doc: fix description of RSS features 00:03:00.666 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:03:00.666 7e421ae345 devtools: support skipping forbid rule check 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:03:00.666 05:04:53 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:03:00.667 05:04:53 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:03:00.667 05:04:53 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:03:00.667 05:04:53 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:00.667 05:04:53 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:00.667 05:04:53 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:03:00.667 05:04:53 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:03:00.667 05:04:53 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:03:00.667 05:04:53 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:03:00.667 05:04:53 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:03:00.667 05:04:53 build_native_dpdk -- 
scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:00.667 05:04:53 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:03:00.667 patching file config/rte_config.h 00:03:00.667 Hunk #1 succeeded at 60 (offset 1 line). 
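The xtrace above steps through cmp_versions from scripts/common.sh: both version strings are split on IFS=.-: into arrays, fields are compared left to right, and lt 23.11.0 21.11.0 returns 1 (false) because 23 > 21 in the first field. A standalone sketch of the same field-by-field comparison — illustrative, not the SPDK helper itself:

    # Returns 0 (true) if dotted version $1 sorts before $2.
    version_lt() {
        local IFS=.-: a b v
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        done
        return 1   # equal versions are not less-than
    }
    version_lt 23.11.0 21.11.0 || echo '23.11.0 is not older than 21.11.0'
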
00:03:00.667 05:04:53 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:03:00.667 05:04:53 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:03:00.667 patching file lib/pcapng/rte_pcapng.c 00:03:00.667 05:04:53 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@341 -- 
$ ver2_l=3 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:00.667 05:04:53 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:03:00.667 05:04:53 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:03:00.667 05:04:53 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:03:00.667 05:04:53 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:03:00.929 05:04:53 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:03:00.929 05:04:53 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:05.137 The Meson build system 00:03:05.137 Version: 1.5.0 00:03:05.137 Source dir: /home/vagrant/spdk_repo/dpdk 00:03:05.137 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:05.137 Build type: native build 00:03:05.137 Program cat found: YES (/usr/bin/cat) 00:03:05.137 Project name: DPDK 00:03:05.137 Project version: 23.11.0 00:03:05.137 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:05.137 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:05.137 Host machine cpu family: x86_64 00:03:05.137 Host machine cpu: x86_64 00:03:05.137 Message: ## Building in Developer Mode ## 00:03:05.137 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:05.137 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:05.137 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:05.137 Program python3 found: YES (/usr/bin/python3) 00:03:05.137 Program cat found: YES (/usr/bin/cat) 00:03:05.137 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
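The "machine" deprecation warning above comes from DPDK's config/meson.build: the setup line used -Dmachine=native, and the warning names cpu_instruction_set as the replacement. A hedged sketch of the equivalent invocation with the renamed option and the explicit setup subcommand, otherwise the same flags as this job:

    meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
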
00:03:05.137 Compiler for C supports arguments -march=native: YES 00:03:05.137 Checking for size of "void *" : 8 00:03:05.138 Checking for size of "void *" : 8 (cached) 00:03:05.138 Library m found: YES 00:03:05.138 Library numa found: YES 00:03:05.138 Has header "numaif.h" : YES 00:03:05.138 Library fdt found: NO 00:03:05.138 Library execinfo found: NO 00:03:05.138 Has header "execinfo.h" : YES 00:03:05.138 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:05.138 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:05.138 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:05.138 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:05.138 Run-time dependency openssl found: YES 3.1.1 00:03:05.138 Run-time dependency libpcap found: YES 1.10.4 00:03:05.138 Has header "pcap.h" with dependency libpcap: YES 00:03:05.138 Compiler for C supports arguments -Wcast-qual: YES 00:03:05.138 Compiler for C supports arguments -Wdeprecated: YES 00:03:05.138 Compiler for C supports arguments -Wformat: YES 00:03:05.138 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:05.138 Compiler for C supports arguments -Wformat-security: NO 00:03:05.138 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:05.138 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:05.138 Compiler for C supports arguments -Wnested-externs: YES 00:03:05.138 Compiler for C supports arguments -Wold-style-definition: YES 00:03:05.138 Compiler for C supports arguments -Wpointer-arith: YES 00:03:05.138 Compiler for C supports arguments -Wsign-compare: YES 00:03:05.138 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:05.138 Compiler for C supports arguments -Wundef: YES 00:03:05.138 Compiler for C supports arguments -Wwrite-strings: YES 00:03:05.138 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:05.138 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:05.138 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:05.138 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:05.138 Program objdump found: YES (/usr/bin/objdump) 00:03:05.138 Compiler for C supports arguments -mavx512f: YES 00:03:05.138 Checking if "AVX512 checking" compiles: YES 00:03:05.138 Fetching value of define "__SSE4_2__" : 1 00:03:05.138 Fetching value of define "__AES__" : 1 00:03:05.138 Fetching value of define "__AVX__" : 1 00:03:05.138 Fetching value of define "__AVX2__" : 1 00:03:05.138 Fetching value of define "__AVX512BW__" : 1 00:03:05.138 Fetching value of define "__AVX512CD__" : 1 00:03:05.138 Fetching value of define "__AVX512DQ__" : 1 00:03:05.138 Fetching value of define "__AVX512F__" : 1 00:03:05.138 Fetching value of define "__AVX512VL__" : 1 00:03:05.138 Fetching value of define "__PCLMUL__" : 1 00:03:05.138 Fetching value of define "__RDRND__" : 1 00:03:05.138 Fetching value of define "__RDSEED__" : 1 00:03:05.138 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:05.138 Fetching value of define "__znver1__" : (undefined) 00:03:05.138 Fetching value of define "__znver2__" : (undefined) 00:03:05.138 Fetching value of define "__znver3__" : (undefined) 00:03:05.138 Fetching value of define "__znver4__" : (undefined) 00:03:05.138 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:05.138 Message: lib/log: Defining dependency "log" 00:03:05.138 Message: lib/kvargs: Defining dependency "kvargs" 00:03:05.138 Message: lib/telemetry: Defining dependency "telemetry" 
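Each "Compiler for C supports arguments ..." line above is Meson test-compiling a tiny snippet with that flag and treating any diagnostic as failure, and each "Fetching value of define ..." line is a preprocessor query. Done by hand, the same checks look roughly like this (illustrative, not Meson's internals):

    # Flag probe: does gcc accept -mavx512f cleanly?
    echo 'int main(void) { return 0; }' > probe.c
    gcc -Werror -mavx512f -c probe.c -o /dev/null && echo '-mavx512f: YES'

    # Define probe: dump predefined macros under those flags and look one up.
    gcc -mavx512f -dM -E - < /dev/null | grep -w '__AVX512F__'   # -> #define __AVX512F__ 1
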
00:03:05.138 Checking for function "getentropy" : NO 00:03:05.138 Message: lib/eal: Defining dependency "eal" 00:03:05.138 Message: lib/ring: Defining dependency "ring" 00:03:05.138 Message: lib/rcu: Defining dependency "rcu" 00:03:05.138 Message: lib/mempool: Defining dependency "mempool" 00:03:05.138 Message: lib/mbuf: Defining dependency "mbuf" 00:03:05.138 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:05.138 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:05.138 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:05.138 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:05.138 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:05.138 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:05.138 Compiler for C supports arguments -mpclmul: YES 00:03:05.138 Compiler for C supports arguments -maes: YES 00:03:05.138 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:05.138 Compiler for C supports arguments -mavx512bw: YES 00:03:05.138 Compiler for C supports arguments -mavx512dq: YES 00:03:05.138 Compiler for C supports arguments -mavx512vl: YES 00:03:05.138 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:05.138 Compiler for C supports arguments -mavx2: YES 00:03:05.138 Compiler for C supports arguments -mavx: YES 00:03:05.138 Message: lib/net: Defining dependency "net" 00:03:05.138 Message: lib/meter: Defining dependency "meter" 00:03:05.138 Message: lib/ethdev: Defining dependency "ethdev" 00:03:05.138 Message: lib/pci: Defining dependency "pci" 00:03:05.138 Message: lib/cmdline: Defining dependency "cmdline" 00:03:05.138 Message: lib/metrics: Defining dependency "metrics" 00:03:05.138 Message: lib/hash: Defining dependency "hash" 00:03:05.138 Message: lib/timer: Defining dependency "timer" 00:03:05.138 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:05.138 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:05.138 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:05.138 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:05.138 Message: lib/acl: Defining dependency "acl" 00:03:05.138 Message: lib/bbdev: Defining dependency "bbdev" 00:03:05.138 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:05.138 Run-time dependency libelf found: YES 0.191 00:03:05.138 Message: lib/bpf: Defining dependency "bpf" 00:03:05.138 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:05.138 Message: lib/compressdev: Defining dependency "compressdev" 00:03:05.138 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:05.138 Message: lib/distributor: Defining dependency "distributor" 00:03:05.138 Message: lib/dmadev: Defining dependency "dmadev" 00:03:05.138 Message: lib/efd: Defining dependency "efd" 00:03:05.138 Message: lib/eventdev: Defining dependency "eventdev" 00:03:05.138 Message: lib/dispatcher: Defining dependency "dispatcher" 00:03:05.138 Message: lib/gpudev: Defining dependency "gpudev" 00:03:05.138 Message: lib/gro: Defining dependency "gro" 00:03:05.138 Message: lib/gso: Defining dependency "gso" 00:03:05.138 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:05.138 Message: lib/jobstats: Defining dependency "jobstats" 00:03:05.138 Message: lib/latencystats: Defining dependency "latencystats" 00:03:05.138 Message: lib/lpm: Defining dependency "lpm" 00:03:05.138 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:05.138 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:05.138 Fetching value of define "__AVX512IFMA__" : 1 00:03:05.138 Message: 
lib/member: Defining dependency "member" 00:03:05.138 Message: lib/pcapng: Defining dependency "pcapng" 00:03:05.138 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:05.138 Message: lib/power: Defining dependency "power" 00:03:05.138 Message: lib/rawdev: Defining dependency "rawdev" 00:03:05.138 Message: lib/regexdev: Defining dependency "regexdev" 00:03:05.138 Message: lib/mldev: Defining dependency "mldev" 00:03:05.138 Message: lib/rib: Defining dependency "rib" 00:03:05.138 Message: lib/reorder: Defining dependency "reorder" 00:03:05.138 Message: lib/sched: Defining dependency "sched" 00:03:05.138 Message: lib/security: Defining dependency "security" 00:03:05.138 Message: lib/stack: Defining dependency "stack" 00:03:05.138 Has header "linux/userfaultfd.h" : YES 00:03:05.138 Has header "linux/vduse.h" : YES 00:03:05.138 Message: lib/vhost: Defining dependency "vhost" 00:03:05.138 Message: lib/ipsec: Defining dependency "ipsec" 00:03:05.138 Message: lib/pdcp: Defining dependency "pdcp" 00:03:05.138 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:05.138 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:05.138 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:05.138 Message: lib/fib: Defining dependency "fib" 00:03:05.138 Message: lib/port: Defining dependency "port" 00:03:05.138 Message: lib/pdump: Defining dependency "pdump" 00:03:05.138 Message: lib/table: Defining dependency "table" 00:03:05.138 Message: lib/pipeline: Defining dependency "pipeline" 00:03:05.138 Message: lib/graph: Defining dependency "graph" 00:03:05.138 Message: lib/node: Defining dependency "node" 00:03:05.138 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:05.138 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:05.138 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:05.138 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:06.514 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:06.514 Compiler for C supports arguments -Wno-unused-value: YES 00:03:06.514 Compiler for C supports arguments -Wno-format: YES 00:03:06.514 Compiler for C supports arguments -Wno-format-security: YES 00:03:06.514 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:06.514 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:06.514 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:06.514 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:06.514 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:06.514 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:06.514 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:06.514 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:06.514 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:06.514 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:06.514 Has header "sys/epoll.h" : YES 00:03:06.514 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:06.514 Configuring doxy-api-html.conf using configuration 00:03:06.514 Configuring doxy-api-man.conf using configuration 00:03:06.514 Program mandb found: YES (/usr/bin/mandb) 00:03:06.514 Program sphinx-build found: NO 00:03:06.514 Configuring rte_build_config.h using configuration 00:03:06.514 Message: 00:03:06.514 ================= 00:03:06.514 Applications Enabled 00:03:06.514 ================= 00:03:06.514 00:03:06.514 apps: 00:03:06.514 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:03:06.514 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:03:06.514 test-pmd, test-regex, test-sad, test-security-perf, 00:03:06.514 00:03:06.514 Message: 00:03:06.514 ================= 00:03:06.514 Libraries Enabled 00:03:06.514 ================= 00:03:06.514 00:03:06.514 libs: 00:03:06.514 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:06.514 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:03:06.515 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:03:06.515 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:03:06.515 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:03:06.515 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:03:06.515 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:03:06.515 00:03:06.515 00:03:06.515 Message: 00:03:06.515 =============== 00:03:06.515 Drivers Enabled 00:03:06.515 =============== 00:03:06.515 00:03:06.515 common: 00:03:06.515 00:03:06.515 bus: 00:03:06.515 pci, vdev, 00:03:06.515 mempool: 00:03:06.515 ring, 00:03:06.515 dma: 00:03:06.515 00:03:06.515 net: 00:03:06.515 i40e, 00:03:06.515 raw: 00:03:06.515 00:03:06.515 crypto: 00:03:06.515 00:03:06.515 compress: 00:03:06.515 00:03:06.515 regex: 00:03:06.515 00:03:06.515 ml: 00:03:06.515 00:03:06.515 vdpa: 00:03:06.515 00:03:06.515 event: 00:03:06.515 00:03:06.515 baseband: 00:03:06.515 00:03:06.515 gpu: 00:03:06.515 00:03:06.515 00:03:06.515 Message: 00:03:06.515 ================= 00:03:06.515 Content Skipped 00:03:06.515 ================= 00:03:06.515 00:03:06.515 apps: 00:03:06.515 00:03:06.515 libs: 00:03:06.515 00:03:06.515 drivers: 00:03:06.515 common/cpt: not in enabled drivers build config 00:03:06.515 common/dpaax: not in enabled drivers build config 00:03:06.515 common/iavf: not in enabled drivers build config 00:03:06.515 common/idpf: not in enabled drivers build config 00:03:06.515 common/mvep: not in enabled drivers build config 00:03:06.515 common/octeontx: not in enabled drivers build config 00:03:06.515 bus/auxiliary: not in enabled drivers build config 00:03:06.515 bus/cdx: not in enabled drivers build config 00:03:06.515 bus/dpaa: not in enabled drivers build config 00:03:06.515 bus/fslmc: not in enabled drivers build config 00:03:06.515 bus/ifpga: not in enabled drivers build config 00:03:06.515 bus/platform: not in enabled drivers build config 00:03:06.515 bus/vmbus: not in enabled drivers build config 00:03:06.515 common/cnxk: not in enabled drivers build config 00:03:06.515 common/mlx5: not in enabled drivers build config 00:03:06.515 common/nfp: not in enabled drivers build config 00:03:06.515 common/qat: not in enabled drivers build config 00:03:06.515 common/sfc_efx: not in enabled drivers build config 00:03:06.515 mempool/bucket: not in enabled drivers build config 00:03:06.515 mempool/cnxk: not in enabled drivers build config 00:03:06.515 mempool/dpaa: not in enabled drivers build config 00:03:06.515 mempool/dpaa2: not in enabled drivers build config 00:03:06.515 mempool/octeontx: not in enabled drivers build config 00:03:06.515 mempool/stack: not in enabled drivers build config 00:03:06.515 dma/cnxk: not in enabled drivers build config 00:03:06.515 dma/dpaa: not in enabled drivers build config 00:03:06.515 dma/dpaa2: not in enabled drivers build config 00:03:06.515 dma/hisilicon: not in enabled drivers build config 00:03:06.515 dma/idxd: not in enabled drivers build 
config 00:03:06.515 dma/ioat: not in enabled drivers build config 00:03:06.515 dma/skeleton: not in enabled drivers build config 00:03:06.515 net/af_packet: not in enabled drivers build config 00:03:06.515 net/af_xdp: not in enabled drivers build config 00:03:06.515 net/ark: not in enabled drivers build config 00:03:06.515 net/atlantic: not in enabled drivers build config 00:03:06.515 net/avp: not in enabled drivers build config 00:03:06.515 net/axgbe: not in enabled drivers build config 00:03:06.515 net/bnx2x: not in enabled drivers build config 00:03:06.515 net/bnxt: not in enabled drivers build config 00:03:06.515 net/bonding: not in enabled drivers build config 00:03:06.515 net/cnxk: not in enabled drivers build config 00:03:06.515 net/cpfl: not in enabled drivers build config 00:03:06.515 net/cxgbe: not in enabled drivers build config 00:03:06.515 net/dpaa: not in enabled drivers build config 00:03:06.515 net/dpaa2: not in enabled drivers build config 00:03:06.515 net/e1000: not in enabled drivers build config 00:03:06.515 net/ena: not in enabled drivers build config 00:03:06.515 net/enetc: not in enabled drivers build config 00:03:06.515 net/enetfec: not in enabled drivers build config 00:03:06.515 net/enic: not in enabled drivers build config 00:03:06.515 net/failsafe: not in enabled drivers build config 00:03:06.515 net/fm10k: not in enabled drivers build config 00:03:06.515 net/gve: not in enabled drivers build config 00:03:06.515 net/hinic: not in enabled drivers build config 00:03:06.515 net/hns3: not in enabled drivers build config 00:03:06.515 net/iavf: not in enabled drivers build config 00:03:06.515 net/ice: not in enabled drivers build config 00:03:06.515 net/idpf: not in enabled drivers build config 00:03:06.515 net/igc: not in enabled drivers build config 00:03:06.515 net/ionic: not in enabled drivers build config 00:03:06.515 net/ipn3ke: not in enabled drivers build config 00:03:06.515 net/ixgbe: not in enabled drivers build config 00:03:06.515 net/mana: not in enabled drivers build config 00:03:06.515 net/memif: not in enabled drivers build config 00:03:06.515 net/mlx4: not in enabled drivers build config 00:03:06.515 net/mlx5: not in enabled drivers build config 00:03:06.515 net/mvneta: not in enabled drivers build config 00:03:06.515 net/mvpp2: not in enabled drivers build config 00:03:06.515 net/netvsc: not in enabled drivers build config 00:03:06.515 net/nfb: not in enabled drivers build config 00:03:06.515 net/nfp: not in enabled drivers build config 00:03:06.515 net/ngbe: not in enabled drivers build config 00:03:06.515 net/null: not in enabled drivers build config 00:03:06.515 net/octeontx: not in enabled drivers build config 00:03:06.515 net/octeon_ep: not in enabled drivers build config 00:03:06.515 net/pcap: not in enabled drivers build config 00:03:06.515 net/pfe: not in enabled drivers build config 00:03:06.515 net/qede: not in enabled drivers build config 00:03:06.515 net/ring: not in enabled drivers build config 00:03:06.515 net/sfc: not in enabled drivers build config 00:03:06.515 net/softnic: not in enabled drivers build config 00:03:06.515 net/tap: not in enabled drivers build config 00:03:06.515 net/thunderx: not in enabled drivers build config 00:03:06.515 net/txgbe: not in enabled drivers build config 00:03:06.515 net/vdev_netvsc: not in enabled drivers build config 00:03:06.515 net/vhost: not in enabled drivers build config 00:03:06.515 net/virtio: not in enabled drivers build config 00:03:06.515 net/vmxnet3: not in enabled drivers build config 
00:03:06.515 raw/cnxk_bphy: not in enabled drivers build config 00:03:06.515 raw/cnxk_gpio: not in enabled drivers build config 00:03:06.515 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:06.515 raw/ifpga: not in enabled drivers build config 00:03:06.515 raw/ntb: not in enabled drivers build config 00:03:06.515 raw/skeleton: not in enabled drivers build config 00:03:06.515 crypto/armv8: not in enabled drivers build config 00:03:06.515 crypto/bcmfs: not in enabled drivers build config 00:03:06.515 crypto/caam_jr: not in enabled drivers build config 00:03:06.515 crypto/ccp: not in enabled drivers build config 00:03:06.515 crypto/cnxk: not in enabled drivers build config 00:03:06.515 crypto/dpaa_sec: not in enabled drivers build config 00:03:06.515 crypto/dpaa2_sec: not in enabled drivers build config 00:03:06.515 crypto/ipsec_mb: not in enabled drivers build config 00:03:06.515 crypto/mlx5: not in enabled drivers build config 00:03:06.515 crypto/mvsam: not in enabled drivers build config 00:03:06.515 crypto/nitrox: not in enabled drivers build config 00:03:06.515 crypto/null: not in enabled drivers build config 00:03:06.515 crypto/octeontx: not in enabled drivers build config 00:03:06.515 crypto/openssl: not in enabled drivers build config 00:03:06.515 crypto/scheduler: not in enabled drivers build config 00:03:06.515 crypto/uadk: not in enabled drivers build config 00:03:06.515 crypto/virtio: not in enabled drivers build config 00:03:06.515 compress/isal: not in enabled drivers build config 00:03:06.515 compress/mlx5: not in enabled drivers build config 00:03:06.515 compress/octeontx: not in enabled drivers build config 00:03:06.515 compress/zlib: not in enabled drivers build config 00:03:06.515 regex/mlx5: not in enabled drivers build config 00:03:06.515 regex/cn9k: not in enabled drivers build config 00:03:06.515 ml/cnxk: not in enabled drivers build config 00:03:06.515 vdpa/ifc: not in enabled drivers build config 00:03:06.515 vdpa/mlx5: not in enabled drivers build config 00:03:06.515 vdpa/nfp: not in enabled drivers build config 00:03:06.515 vdpa/sfc: not in enabled drivers build config 00:03:06.515 event/cnxk: not in enabled drivers build config 00:03:06.515 event/dlb2: not in enabled drivers build config 00:03:06.515 event/dpaa: not in enabled drivers build config 00:03:06.515 event/dpaa2: not in enabled drivers build config 00:03:06.515 event/dsw: not in enabled drivers build config 00:03:06.515 event/opdl: not in enabled drivers build config 00:03:06.515 event/skeleton: not in enabled drivers build config 00:03:06.515 event/sw: not in enabled drivers build config 00:03:06.515 event/octeontx: not in enabled drivers build config 00:03:06.515 baseband/acc: not in enabled drivers build config 00:03:06.515 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:06.515 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:06.515 baseband/la12xx: not in enabled drivers build config 00:03:06.515 baseband/null: not in enabled drivers build config 00:03:06.515 baseband/turbo_sw: not in enabled drivers build config 00:03:06.515 gpu/cuda: not in enabled drivers build config 00:03:06.515 00:03:06.515 00:03:06.515 Build targets in project: 215 00:03:06.515 00:03:06.515 DPDK 23.11.0 00:03:06.515 00:03:06.515 User defined options 00:03:06.515 libdir : lib 00:03:06.515 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:06.515 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:06.515 c_link_args : 00:03:06.515 enable_docs : false 00:03:06.515 
enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:06.515 enable_kmods : false 00:03:06.515 machine : native 00:03:06.515 tests : false 00:03:06.515 00:03:06.515 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:06.515 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:06.515 05:04:59 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:06.515 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:06.515 [1/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:06.515 [2/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:06.515 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:06.774 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:06.774 [5/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:06.774 [6/705] Linking static target lib/librte_kvargs.a 00:03:06.774 [7/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:06.774 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:06.774 [9/705] Linking static target lib/librte_log.a 00:03:06.774 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:06.774 [11/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:06.774 [12/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:06.774 [13/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.035 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:07.035 [15/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:07.035 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:07.035 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.035 [18/705] Linking target lib/librte_log.so.24.0 00:03:07.035 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:07.035 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:07.295 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:07.295 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:07.295 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:07.295 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:07.295 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:07.295 [26/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:07.295 [27/705] Linking target lib/librte_kvargs.so.24.0 00:03:07.295 [28/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:07.556 [29/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:07.556 [30/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:07.556 [31/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:07.556 [32/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:07.556 [33/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 
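The "User defined options" summary above is enough to reconstruct the configure step that produced this build tree. A minimal sketch, assuming the source checkout at /home/vagrant/spdk_repo/dpdk and the build-tmp directory named in the ninja invocation (the deprecation WARNING suggests the job ran the bare `meson [options]` form rather than `meson setup [options]`):

  cd /home/vagrant/spdk_repo/dpdk
  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false
  ninja -C build-tmp -j10    # the step the log enters at [1/705] above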
00:03:07.556 [34/705] Linking static target lib/librte_telemetry.a 00:03:07.556 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:07.556 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:07.556 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:07.556 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:07.817 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:07.817 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:07.817 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:07.817 [42/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:07.817 [43/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.817 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:07.817 [45/705] Linking target lib/librte_telemetry.so.24.0 00:03:07.817 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:07.817 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:08.078 [48/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:08.078 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:08.078 [50/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:08.078 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:08.078 [52/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:08.078 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:08.078 [54/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:08.078 [55/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:08.337 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:08.337 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:08.337 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:08.337 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:08.337 [60/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:08.337 [61/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:08.337 [62/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:08.337 [63/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:08.337 [64/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:08.337 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:08.337 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:08.597 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:08.597 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:08.597 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:08.597 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:08.597 [71/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:08.597 [72/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:08.597 [73/705] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:08.597 [74/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:08.597 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:08.857 [76/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:08.857 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:08.857 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:08.857 [79/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:08.857 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:08.857 [81/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:08.857 [82/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:09.118 [83/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:09.118 [84/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:09.118 [85/705] Linking static target lib/librte_ring.a 00:03:09.118 [86/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:09.118 [87/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:09.118 [88/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:09.118 [89/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:09.118 [90/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.379 [91/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:09.379 [92/705] Linking static target lib/librte_eal.a 00:03:09.379 [93/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:09.379 [94/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:09.379 [95/705] Linking static target lib/librte_mempool.a 00:03:09.379 [96/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:09.379 [97/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:09.639 [98/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:09.640 [99/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:09.640 [100/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:09.640 [101/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:09.640 [102/705] Linking static target lib/librte_rcu.a 00:03:09.640 [103/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:09.640 [104/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:09.912 [105/705] Linking static target lib/librte_meter.a 00:03:09.912 [106/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:09.912 [107/705] Linking static target lib/librte_mbuf.a 00:03:09.912 [108/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:09.912 [109/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.912 [110/705] Linking static target lib/librte_net.a 00:03:09.912 [111/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:09.912 [112/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:09.912 [113/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:09.912 [114/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.912 [115/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:10.170 [116/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.170 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:10.170 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.170 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:10.430 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:10.690 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:10.690 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:10.690 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:10.690 [124/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:10.690 [125/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:10.690 [126/705] Linking static target lib/librte_pci.a 00:03:10.690 [127/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:10.690 [128/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:10.690 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:10.950 [130/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:10.950 [131/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.950 [132/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:10.950 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:10.950 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:10.950 [135/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:10.950 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:10.950 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:10.950 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:10.950 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:10.950 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:11.210 [141/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:11.210 [142/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:11.210 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:11.210 [144/705] Linking static target lib/librte_cmdline.a 00:03:11.210 [145/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:11.210 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:11.210 [147/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:11.210 [148/705] Linking static target lib/librte_metrics.a 00:03:11.210 [149/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:11.471 [150/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.471 [151/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:11.471 [152/705] Linking static target lib/librte_timer.a 00:03:11.734 [153/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:11.734 [154/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:11.734 [155/705] Generating lib/cmdline.sym_chk with a custom 
command (wrapped by meson to capture output) 00:03:11.734 [156/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.996 [157/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:11.996 [158/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:11.996 [159/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:12.257 [160/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:12.257 [161/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:12.257 [162/705] Linking static target lib/librte_bitratestats.a 00:03:12.257 [163/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:12.519 [164/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.519 [165/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:12.519 [166/705] Linking static target lib/librte_bbdev.a 00:03:12.519 [167/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:12.780 [168/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:12.780 [169/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:12.780 [170/705] Linking static target lib/acl/libavx2_tmp.a 00:03:12.780 [171/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:13.039 [172/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.039 [173/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:13.039 [174/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:13.039 [175/705] Linking static target lib/librte_ethdev.a 00:03:13.039 [176/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:13.298 [177/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:13.298 [178/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:13.298 [179/705] Linking static target lib/librte_hash.a 00:03:13.298 [180/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:13.298 [181/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:13.558 [182/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:13.558 [183/705] Linking static target lib/librte_cfgfile.a 00:03:13.558 [184/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:13.558 [185/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:13.558 [186/705] Linking static target lib/librte_bpf.a 00:03:13.558 [187/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.820 [188/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:13.820 [189/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.820 [190/705] Linking target lib/librte_eal.so.24.0 00:03:13.820 [191/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.820 [192/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:13.820 [193/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:13.820 [194/705] Linking static target lib/librte_acl.a 00:03:13.820 [195/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:13.820 [196/705] Linking target lib/librte_ring.so.24.0 00:03:13.820 [197/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 
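The recurring "Generating symbol file …" and "…sym_chk" steps appear to be DPDK's build-time consistency check between each shared library's exported symbols and its version.map. Once the corresponding .so exists, the same information can be inspected by hand; a hedged spot-check (the library path is assumed from the build directory above):

  nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_eal.so.24.0 \
    | awk '$2 == "T" && $3 ~ /^rte_/' | head -n 5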
00:03:13.820 [198/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.820 [199/705] Linking target lib/librte_meter.so.24.0 00:03:13.820 [200/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:13.820 [201/705] Linking target lib/librte_pci.so.24.0 00:03:13.820 [202/705] Linking target lib/librte_timer.so.24.0 00:03:14.095 [203/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:14.095 [204/705] Linking target lib/librte_rcu.so.24.0 00:03:14.095 [205/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:14.095 [206/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:14.095 [207/705] Linking target lib/librte_mempool.so.24.0 00:03:14.095 [208/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:14.095 [209/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.095 [210/705] Linking target lib/librte_cfgfile.so.24.0 00:03:14.095 [211/705] Linking static target lib/librte_compressdev.a 00:03:14.095 [212/705] Linking target lib/librte_acl.so.24.0 00:03:14.095 [213/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:14.095 [214/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:14.095 [215/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:14.095 [216/705] Linking target lib/librte_mbuf.so.24.0 00:03:14.095 [217/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:14.095 [218/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:14.357 [219/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:14.357 [220/705] Linking target lib/librte_net.so.24.0 00:03:14.357 [221/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:14.357 [222/705] Linking target lib/librte_bbdev.so.24.0 00:03:14.357 [223/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:14.357 [224/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:14.357 [225/705] Linking target lib/librte_cmdline.so.24.0 00:03:14.357 [226/705] Linking target lib/librte_hash.so.24.0 00:03:14.357 [227/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.357 [228/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:14.357 [229/705] Linking target lib/librte_compressdev.so.24.0 00:03:14.357 [230/705] Linking static target lib/librte_distributor.a 00:03:14.357 [231/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:14.618 [232/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:14.618 [233/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.618 [234/705] Linking target lib/librte_distributor.so.24.0 00:03:14.618 [235/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:14.618 [236/705] Linking static target lib/librte_dmadev.a 00:03:14.880 [237/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:14.880 [238/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:14.880 [239/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:14.880 [240/705] Linking target lib/librte_dmadev.so.24.0 00:03:15.141 [241/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:15.141 [242/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:15.403 [243/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:15.403 [244/705] Linking static target lib/librte_cryptodev.a 00:03:15.403 [245/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:15.403 [246/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:15.403 [247/705] Linking static target lib/librte_efd.a 00:03:15.403 [248/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:15.403 [249/705] Linking static target lib/librte_dispatcher.a 00:03:15.663 [250/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.663 [251/705] Linking target lib/librte_efd.so.24.0 00:03:15.663 [252/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:15.663 [253/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:15.663 [254/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:15.663 [255/705] Linking static target lib/librte_gpudev.a 00:03:15.663 [256/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:15.924 [257/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.924 [258/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:16.184 [259/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:16.184 [260/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.184 [261/705] Linking target lib/librte_cryptodev.so.24.0 00:03:16.184 [262/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:16.184 [263/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:16.184 [264/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:16.184 [265/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:16.184 [266/705] Linking static target lib/librte_gro.a 00:03:16.446 [267/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.446 [268/705] Linking target lib/librte_gpudev.so.24.0 00:03:16.446 [269/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:16.446 [270/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:16.446 [271/705] Linking static target lib/librte_eventdev.a 00:03:16.446 [272/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:16.446 [273/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.446 [274/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:16.446 [275/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:16.446 [276/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.446 [277/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:16.446 [278/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:16.706 [279/705] Linking target lib/librte_ethdev.so.24.0 
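With librte_ethdev and the other core libraries now linked, the tree is close to the shape an application would consume. DPDK installs a libdpdk pkg-config file, so after `ninja -C build-tmp install` a consumer could be compiled roughly as follows (the .pc path follows from the prefix and libdir shown in the options summary; my_app.c is a placeholder):

  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  cc -O2 my_app.c -o my_app $(pkg-config --cflags --libs libdpdk)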
00:03:16.706 [280/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:16.706 [281/705] Linking static target lib/librte_gso.a 00:03:16.706 [282/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:16.706 [283/705] Linking target lib/librte_metrics.so.24.0 00:03:16.706 [284/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.706 [285/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:16.706 [286/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:16.706 [287/705] Linking target lib/librte_gro.so.24.0 00:03:16.706 [288/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:16.706 [289/705] Linking target lib/librte_bpf.so.24.0 00:03:16.706 [290/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:16.706 [291/705] Linking static target lib/librte_jobstats.a 00:03:16.967 [292/705] Linking target lib/librte_gso.so.24.0 00:03:16.967 [293/705] Linking target lib/librte_bitratestats.so.24.0 00:03:16.967 [294/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:16.967 [295/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:16.967 [296/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:16.967 [297/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:16.967 [298/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:16.967 [299/705] Linking static target lib/librte_ip_frag.a 00:03:16.967 [300/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.967 [301/705] Linking target lib/librte_jobstats.so.24.0 00:03:17.262 [302/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:17.262 [303/705] Linking static target lib/librte_latencystats.a 00:03:17.262 [304/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.262 [305/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:17.262 [306/705] Linking target lib/librte_ip_frag.so.24.0 00:03:17.262 [307/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:17.262 [308/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.262 [309/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:17.262 [310/705] Linking target lib/librte_latencystats.so.24.0 00:03:17.262 [311/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:17.523 [312/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:17.523 [313/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:17.523 [314/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:17.523 [315/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:17.523 [316/705] Linking static target lib/librte_lpm.a 00:03:17.523 [317/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:17.783 [318/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:17.783 [319/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:17.783 [320/705] Linking static target lib/librte_pcapng.a 00:03:17.783 [321/705] Compiling C object 
lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:17.783 [322/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:17.783 [323/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:17.783 [324/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.783 [325/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:17.783 [326/705] Linking target lib/librte_lpm.so.24.0 00:03:17.783 [327/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.783 [328/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.783 [329/705] Linking target lib/librte_eventdev.so.24.0 00:03:18.043 [330/705] Linking target lib/librte_pcapng.so.24.0 00:03:18.043 [331/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:18.043 [332/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:18.043 [333/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:18.043 [334/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:18.043 [335/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:18.043 [336/705] Linking target lib/librte_dispatcher.so.24.0 00:03:18.043 [337/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:18.043 [338/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:18.043 [339/705] Linking static target lib/librte_power.a 00:03:18.302 [340/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:18.302 [341/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:18.302 [342/705] Linking static target lib/librte_regexdev.a 00:03:18.302 [343/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:18.302 [344/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:18.302 [345/705] Linking static target lib/librte_rawdev.a 00:03:18.302 [346/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:18.302 [347/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:18.302 [348/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:18.302 [349/705] Linking static target lib/librte_mldev.a 00:03:18.302 [350/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:18.302 [351/705] Linking static target lib/librte_member.a 00:03:18.562 [352/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.562 [353/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.562 [354/705] Linking target lib/librte_rawdev.so.24.0 00:03:18.562 [355/705] Linking target lib/librte_power.so.24.0 00:03:18.562 [356/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:18.562 [357/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.562 [358/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:18.562 [359/705] Linking target lib/librte_member.so.24.0 00:03:18.562 [360/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.562 [361/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:18.562 [362/705] Linking 
static target lib/librte_reorder.a 00:03:18.562 [363/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:18.821 [364/705] Linking target lib/librte_regexdev.so.24.0 00:03:18.821 [365/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:18.821 [366/705] Linking static target lib/librte_rib.a 00:03:18.821 [367/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:18.821 [368/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:18.821 [369/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.821 [370/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:18.821 [371/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:18.821 [372/705] Linking target lib/librte_reorder.so.24.0 00:03:18.821 [373/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:18.821 [374/705] Linking static target lib/librte_stack.a 00:03:19.082 [375/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:19.082 [376/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.082 [377/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:19.082 [378/705] Linking static target lib/librte_security.a 00:03:19.082 [379/705] Linking target lib/librte_rib.so.24.0 00:03:19.082 [380/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.082 [381/705] Linking target lib/librte_stack.so.24.0 00:03:19.082 [382/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:19.082 [383/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.082 [384/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:19.082 [385/705] Linking target lib/librte_mldev.so.24.0 00:03:19.082 [386/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:19.359 [387/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.359 [388/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:19.359 [389/705] Linking target lib/librte_security.so.24.0 00:03:19.359 [390/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:19.359 [391/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:19.359 [392/705] Linking static target lib/librte_sched.a 00:03:19.620 [393/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:19.620 [394/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:19.620 [395/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.620 [396/705] Linking target lib/librte_sched.so.24.0 00:03:19.883 [397/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:19.883 [398/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:19.883 [399/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:19.883 [400/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:20.144 [401/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:20.144 [402/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:20.144 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:20.144 [404/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 
00:03:20.404 [405/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:20.404 [406/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:20.404 [407/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:20.404 [408/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:20.404 [409/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:20.404 [410/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:20.404 [411/705] Linking static target lib/librte_ipsec.a 00:03:20.404 [412/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:20.664 [413/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:20.664 [414/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.664 [415/705] Linking target lib/librte_ipsec.so.24.0 00:03:20.664 [416/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:20.664 [417/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:20.664 [418/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:20.926 [419/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:21.187 [420/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:21.187 [421/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:21.187 [422/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:21.187 [423/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:21.187 [424/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:21.187 [425/705] Linking static target lib/librte_fib.a 00:03:21.187 [426/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:21.448 [427/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:21.448 [428/705] Linking static target lib/librte_pdcp.a 00:03:21.448 [429/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.448 [430/705] Linking target lib/librte_fib.so.24.0 00:03:21.707 [431/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:21.707 [432/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.707 [433/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:21.707 [434/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:21.707 [435/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:21.707 [436/705] Linking target lib/librte_pdcp.so.24.0 00:03:21.707 [437/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:21.707 [438/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:21.965 [439/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:22.224 [440/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:22.224 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:22.224 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:22.224 [443/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:22.224 [444/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:22.488 [445/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:22.488 [446/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 
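Because the configuration above was built with machine : native and probed -march=skylake-avx512 as YES, the resulting binaries assume the build host's instruction set and may fail with illegal-instruction errors on CPUs lacking those extensions. A quick sanity check that a target machine offers the same AVX-512 subsets the probes reported:

  grep -o 'avx512[a-z0-9]*' /proc/cpuinfo | sort -u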
00:03:22.488 [447/705] Linking static target lib/librte_pdump.a 00:03:22.488 [448/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:22.488 [449/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:22.488 [450/705] Linking static target lib/librte_port.a 00:03:22.488 [451/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:22.488 [452/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.747 [453/705] Linking target lib/librte_pdump.so.24.0 00:03:22.747 [454/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:22.747 [455/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.747 [456/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:22.747 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:22.747 [458/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:22.747 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:22.747 [460/705] Linking target lib/librte_port.so.24.0 00:03:23.005 [461/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:23.005 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:23.005 [463/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:23.263 [464/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:23.263 [465/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:23.263 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:23.263 [467/705] Linking static target lib/librte_table.a 00:03:23.521 [468/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:23.521 [469/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:23.521 [470/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.521 [471/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:23.521 [472/705] Linking target lib/librte_table.so.24.0 00:03:23.779 [473/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:23.779 [474/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:23.779 [475/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:23.779 [476/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:24.037 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:24.037 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:24.037 [479/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:24.037 [480/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:24.037 [481/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:24.294 [482/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:24.294 [483/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:24.294 [484/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:24.552 [485/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:24.552 [486/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:24.552 [487/705] Linking static target 
lib/librte_graph.a 00:03:24.552 [488/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:24.810 [489/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:24.810 [490/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:24.810 [491/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.810 [492/705] Linking target lib/librte_graph.so.24.0 00:03:24.810 [493/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:25.069 [494/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:25.069 [495/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:25.069 [496/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:25.069 [497/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:25.069 [498/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:25.069 [499/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:25.069 [500/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:25.069 [501/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:25.326 [502/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:25.327 [503/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:25.327 [504/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:25.327 [505/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:25.327 [506/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:25.327 [507/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:25.584 [508/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:25.584 [509/705] Linking static target lib/librte_node.a 00:03:25.584 [510/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:25.584 [511/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.584 [512/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:25.584 [513/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:25.584 [514/705] Linking target lib/librte_node.so.24.0 00:03:25.842 [515/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:25.842 [516/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:25.842 [517/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:25.842 [518/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:25.842 [519/705] Linking static target drivers/librte_bus_pci.a 00:03:25.842 [520/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:25.842 [521/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:25.842 [522/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:25.842 [523/705] Linking static target drivers/librte_bus_vdev.a 00:03:25.842 [524/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:25.842 [525/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:26.100 [526/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:26.100 [527/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 
00:03:26.100 [528/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.100 [529/705] Linking target drivers/librte_bus_vdev.so.24.0 00:03:26.100 [530/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:26.100 [531/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:26.100 [532/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.359 [533/705] Linking target drivers/librte_bus_pci.so.24.0 00:03:26.359 [534/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:26.359 [535/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:26.359 [536/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:26.359 [537/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:26.359 [538/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:26.359 [539/705] Linking static target drivers/librte_mempool_ring.a 00:03:26.359 [540/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:26.359 [541/705] Linking target drivers/librte_mempool_ring.so.24.0 00:03:26.359 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:26.616 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:26.890 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:26.890 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:27.167 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:27.425 [547/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:27.425 [548/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:27.683 [549/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:27.683 [550/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:27.683 [551/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:27.683 [552/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:27.683 [553/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:27.941 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:27.941 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:27.941 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:28.198 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:28.198 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:28.198 [559/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:28.456 [560/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:28.456 [561/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:28.713 [562/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:28.713 [563/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:28.713 [564/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:28.713 [565/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 
00:03:28.713 [566/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:28.713 [567/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:28.970 [568/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:28.970 [569/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:28.970 [570/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:28.970 [571/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:28.970 [572/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:28.970 [573/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:29.228 [574/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:29.228 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:29.228 [576/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:29.485 [577/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:29.485 [578/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:29.485 [579/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:29.485 [580/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:29.742 [581/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:29.742 [582/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:29.742 [583/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:29.742 [584/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:29.742 [585/705] Linking static target drivers/librte_net_i40e.a 00:03:29.742 [586/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:29.742 [587/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:29.742 [588/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:30.000 [589/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.258 [590/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:30.258 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:30.258 [592/705] Linking target drivers/librte_net_i40e.so.24.0 00:03:30.258 [593/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:30.258 [594/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:30.258 [595/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:30.258 [596/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:30.517 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:30.775 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:30.775 [599/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:30.775 [600/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:30.775 [601/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:30.775 [602/705] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:30.775 [603/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:30.775 [604/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:31.033 [605/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:31.033 [606/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:31.033 [607/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:31.033 [608/705] Linking static target lib/librte_vhost.a 00:03:31.033 [609/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:31.033 [610/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:31.033 [611/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:31.290 [612/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:31.290 [613/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:31.290 [614/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:31.550 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:31.550 [616/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:31.810 [617/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.810 [618/705] Linking target lib/librte_vhost.so.24.0 00:03:32.069 [619/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:32.069 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:32.069 [621/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:32.069 [622/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:32.069 [623/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:32.069 [624/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:32.327 [625/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:32.327 [626/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:32.327 [627/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:32.327 [628/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:32.327 [629/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:32.327 [630/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:32.586 [631/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:32.586 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:32.586 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:32.586 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:32.586 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:32.586 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:32.844 [637/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:32.844 [638/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:32.844 [639/705] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:32.844 [640/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:32.844 [641/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:33.101 [642/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:33.101 [643/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:33.101 [644/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:33.101 [645/705] Linking static target lib/librte_pipeline.a 00:03:33.101 [646/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:33.101 [647/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:33.359 [648/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:33.359 [649/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:33.359 [650/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:33.359 [651/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:33.359 [652/705] Linking target app/dpdk-dumpcap 00:03:33.618 [653/705] Linking target app/dpdk-graph 00:03:33.618 [654/705] Linking target app/dpdk-test-acl 00:03:33.618 [655/705] Linking target app/dpdk-proc-info 00:03:33.618 [656/705] Linking target app/dpdk-pdump 00:03:33.618 [657/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:33.618 [658/705] Linking target app/dpdk-test-cmdline 00:03:33.876 [659/705] Linking target app/dpdk-test-compress-perf 00:03:33.876 [660/705] Linking target app/dpdk-test-crypto-perf 00:03:33.876 [661/705] Linking target app/dpdk-test-fib 00:03:33.876 [662/705] Linking target app/dpdk-test-dma-perf 00:03:33.876 [663/705] Linking target app/dpdk-test-eventdev 00:03:33.876 [664/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:33.876 [665/705] Linking target app/dpdk-test-flow-perf 00:03:33.876 [666/705] Linking target app/dpdk-test-gpudev 00:03:34.136 [667/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:34.136 [668/705] Linking target app/dpdk-test-pipeline 00:03:34.136 [669/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:34.136 [670/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:34.136 [671/705] Linking target app/dpdk-test-mldev 00:03:34.399 [672/705] Linking target app/dpdk-test-bbdev 00:03:34.399 [673/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:34.399 [674/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:34.399 [675/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:34.657 [676/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:34.657 [677/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:34.657 [678/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:34.915 [679/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:34.915 [680/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:35.174 [681/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.174 [682/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:35.174 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:35.174 [684/705] 
Linking target lib/librte_pipeline.so.24.0 00:03:35.174 [685/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:35.174 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:35.432 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:35.432 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:35.432 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:35.432 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:35.690 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:35.690 [692/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:35.949 [693/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:35.949 [694/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:35.949 [695/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:35.949 [696/705] Linking target app/dpdk-test-sad 00:03:35.949 [697/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:36.208 [698/705] Linking target app/dpdk-test-regex 00:03:36.208 [699/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:36.208 [700/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:36.467 [701/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:36.467 [702/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:36.725 [703/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:36.725 [704/705] Linking target app/dpdk-test-security-perf 00:03:36.983 [705/705] Linking target app/dpdk-testpmd 00:03:36.983 05:05:30 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:36.983 05:05:30 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:36.983 05:05:30 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:36.983 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:36.983 [0/1] Installing files. 
00:03:37.243 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.243 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:37.244 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:37.244 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.244 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.505 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.505 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.505 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.505 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:37.505 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:37.505 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:37.505 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:37.505 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:37.505 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:37.505 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:37.505 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:37.505 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:37.505 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:37.505 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:37.505 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:37.505 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.506 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.506 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:37.507 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.507 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:37.508 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:37.508 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:37.508 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.508 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:37.509 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.509 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.770 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.770 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.770 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.770 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:37.770 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.770 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:37.770 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.770 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:37.770 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.770 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:37.770 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.770 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.770 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.770 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.770 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.770 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.770 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.770 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.771 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.771 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.771 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.771 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.771 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.771 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.771 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.771 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.771 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.771 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.771 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.771 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.771 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.772 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.773 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:37.774 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:37.774 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:37.774 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:37.774 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:37.774 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:37.774 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:37.774 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:37.774 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:37.774 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:37.774 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:37.774 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:37.774 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:37.774 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:37.774 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:37.774 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:37.774 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:37.774 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:37.774 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:37.774 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:37.774 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:37.774 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:37.774 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:37.774 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:37.774 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:37.774 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:37.774 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:37.774 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:37.774 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:37.774 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:37.774 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:37.774 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:37.774 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:37.774 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:37.774 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:37.774 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:37.774 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:37.774 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:37.774 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:37.774 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:37.774 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:37.774 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:37.774 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:37.774 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:37.774 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:37.774 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:37.774 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:37.774 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:37.774 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:37.774 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:37.774 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:37.774 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:37.774 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:37.774 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:37.774 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:37.774 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:37.774 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:37.774 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:37.774 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:37.774 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:37.774 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:37.774 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:37.774 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:37.774 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:37.774 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:37.774 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:37.774 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:37.774 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:37.774 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:37.774 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:37.774 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:37.774 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:37.774 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:37.774 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:37.774 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:37.774 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:37.774 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:37.774 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:37.774 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:37.774 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:37.774 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:37.774 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:37.774 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:37.774 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:37.774 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:37.774 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:37.774 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:37.774 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:37.774 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:37.774 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:37.774 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:37.774 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:37.774 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:37.775 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:37.775 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:37.775 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:37.775 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:37.775 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:37.775 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:37.775 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:37.775 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:37.775 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:37.775 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:37.775 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:37.775 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:37.775 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:37.775 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:37.775 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:37.775 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:37.775 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:37.775 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:37.775 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:37.775 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:37.775 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:37.775 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:37.775 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:37.775 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:37.775 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:37.775 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:37.775 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:37.775 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:37.775 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:37.775 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:37.775 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:37.775 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:37.775 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:37.775 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:37.775 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:37.775 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:37.775 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:37.775 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:37.775 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:37.775 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:37.775 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:37.775 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:37.775 05:05:30 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:37.775 ************************************ 00:03:37.775 END TEST build_native_dpdk 00:03:37.775 ************************************ 00:03:37.775 05:05:30 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:37.775 00:03:37.775 real 0m37.100s 00:03:37.775 user 4m16.877s 00:03:37.775 sys 0m37.450s 00:03:37.775 05:05:30 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:37.775 05:05:30 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:37.775 05:05:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:37.775 05:05:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:37.775 05:05:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:37.775 05:05:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:37.775 05:05:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:37.775 05:05:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:37.775 05:05:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:37.775 05:05:30 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:38.034 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:38.034 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.034 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:38.034 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:38.599 Using 'verbs' RDMA provider 00:03:49.505 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:59.483 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:59.483 Creating mk/config.mk...done. 00:03:59.483 Creating mk/cc.flags.mk...done. 00:03:59.483 Type 'make' to build. 00:03:59.483 05:05:52 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:59.483 05:05:52 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:59.483 05:05:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:59.483 05:05:52 -- common/autotest_common.sh@10 -- $ set +x 00:03:59.483 ************************************ 00:03:59.483 START TEST make 00:03:59.483 ************************************ 00:03:59.483 05:05:52 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:59.742 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:59.742 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:59.742 meson setup builddir \ 00:03:59.742 -Dwith-libaio=enabled \ 00:03:59.742 -Dwith-liburing=enabled \ 00:03:59.742 -Dwith-libvfn=disabled \ 00:03:59.742 -Dwith-spdk=false && \ 00:03:59.742 meson compile -C builddir && \ 00:03:59.742 cd -) 00:03:59.742 make[1]: Nothing to be done for 'all'. 
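A minimal shell sketch of the DPDK-to-SPDK handoff captured above, assuming this run's checkout layout (/home/vagrant/spdk_repo/{dpdk,spdk}) and using only a subset of the configure flags echoed in the log; the PKG_CONFIG_PATH step mirrors the "Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs" line, and none of this is part of the captured output:

    # Point pkg-config at the libdpdk.pc installed by the step above.
    DPDK_BUILD=/home/vagrant/spdk_repo/dpdk/build
    export PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig:$PKG_CONFIG_PATH"
    pkg-config --modversion libdpdk    # sanity check: prints the installed DPDK version

    # Configure SPDK against that prebuilt DPDK, then build.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-dpdk="$DPDK_BUILD" --with-shared
    make -j"$(nproc)"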
00:04:01.644 The Meson build system 00:04:01.644 Version: 1.5.0 00:04:01.644 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:01.644 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:01.644 Build type: native build 00:04:01.644 Project name: xnvme 00:04:01.644 Project version: 0.7.3 00:04:01.644 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:01.644 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:01.644 Host machine cpu family: x86_64 00:04:01.644 Host machine cpu: x86_64 00:04:01.644 Message: host_machine.system: linux 00:04:01.644 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:01.644 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:01.644 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:01.644 Run-time dependency threads found: YES 00:04:01.644 Has header "setupapi.h" : NO 00:04:01.644 Has header "linux/blkzoned.h" : YES 00:04:01.644 Has header "linux/blkzoned.h" : YES (cached) 00:04:01.644 Has header "libaio.h" : YES 00:04:01.644 Library aio found: YES 00:04:01.644 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:01.644 Run-time dependency liburing found: YES 2.2 00:04:01.644 Dependency libvfn skipped: feature with-libvfn disabled 00:04:01.644 Run-time dependency appleframeworks found: NO (tried framework) 00:04:01.644 Run-time dependency appleframeworks found: NO (tried framework) 00:04:01.644 Configuring xnvme_config.h using configuration 00:04:01.645 Configuring xnvme.spec using configuration 00:04:01.645 Run-time dependency bash-completion found: YES 2.11 00:04:01.645 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:01.645 Program cp found: YES (/usr/bin/cp) 00:04:01.645 Has header "winsock2.h" : NO 00:04:01.645 Has header "dbghelp.h" : NO 00:04:01.645 Library rpcrt4 found: NO 00:04:01.645 Library rt found: YES 00:04:01.645 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:01.645 Found CMake: /usr/bin/cmake (3.27.7) 00:04:01.645 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:04:01.645 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:04:01.645 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:04:01.645 Build targets in project: 32 00:04:01.645 00:04:01.645 xnvme 0.7.3 00:04:01.645 00:04:01.645 User defined options 00:04:01.645 with-libaio : enabled 00:04:01.645 with-liburing: enabled 00:04:01.645 with-libvfn : disabled 00:04:01.645 with-spdk : false 00:04:01.645 00:04:01.645 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:01.903 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:01.903 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:04:01.903 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:04:01.903 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:04:01.903 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:04:01.903 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:04:01.903 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:04:01.903 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:04:01.903 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:04:01.903 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:04:01.903 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:04:01.903 
[11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:04:01.903 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:04:02.164 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:04:02.164 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:04:02.164 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:04:02.164 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:04:02.164 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:04:02.164 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:04:02.164 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:04:02.164 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:04:02.164 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:04:02.164 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:04:02.164 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:04:02.164 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:04:02.164 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:04:02.164 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:04:02.164 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:04:02.164 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:04:02.164 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:04:02.164 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:04:02.164 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:04:02.164 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:04:02.164 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:04:02.164 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:04:02.164 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:04:02.164 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:04:02.164 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:04:02.164 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:04:02.164 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:04:02.164 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:04:02.164 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:04:02.164 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:04:02.164 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:04:02.164 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:04:02.164 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:04:02.164 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:04:02.164 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:04:02.164 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:04:02.164 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:04:02.164 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:04:02.164 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:04:02.424 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:04:02.424 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:04:02.424 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:04:02.424 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:04:02.424 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:04:02.424 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:04:02.424 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:04:02.424 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:04:02.424 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:04:02.424 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:04:02.424 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:04:02.424 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:04:02.424 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:04:02.424 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:04:02.424 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:04:02.424 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:04:02.424 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:04:02.424 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:04:02.424 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:04:02.424 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:04:02.684 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:04:02.684 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:04:02.684 [74/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:04:02.684 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:04:02.684 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:04:02.684 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:04:02.684 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:04:02.684 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:04:02.684 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:04:02.684 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:04:02.684 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:04:02.684 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:04:02.684 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:04:02.684 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:04:02.684 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:04:02.684 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:04:02.684 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:04:02.684 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:04:02.684 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:04:02.684 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:04:02.684 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:04:02.684 [93/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:04:02.942 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:04:02.942 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:04:02.942 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:04:02.942 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:04:02.942 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:04:02.942 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:04:02.942 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:04:02.942 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:04:02.942 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:04:02.942 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:04:02.942 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:04:02.942 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:04:02.942 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:04:02.942 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:04:02.942 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:04:02.942 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:04:02.942 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:04:02.942 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:04:02.942 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:04:02.942 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:04:02.942 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:04:02.942 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:04:02.942 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:04:02.942 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:04:02.942 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:04:02.942 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:04:02.942 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:04:02.942 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:04:02.942 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:04:02.942 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:04:02.942 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:04:02.942 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:04:02.942 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:04:02.942 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:04:02.942 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:04:02.942 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:04:03.203 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:04:03.203 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:04:03.203 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:04:03.203 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:04:03.203 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:04:03.203 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:04:03.203 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:04:03.203 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:04:03.203 [138/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:04:03.203 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:04:03.203 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:04:03.203 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:04:03.203 [142/203] Compiling C object 
tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:04:03.203 [143/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:04:03.203 [144/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:04:03.203 [145/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:04:03.465 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:04:03.465 [147/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:04:03.465 [148/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:04:03.465 [149/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:04:03.465 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:04:03.465 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:04:03.465 [152/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:04:03.465 [153/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:04:03.465 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:04:03.465 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:04:03.465 [156/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:04:03.465 [157/203] Linking target lib/libxnvme.so 00:04:03.465 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:04:03.465 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:04:03.465 [160/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:04:03.465 [161/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:04:03.465 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:04:03.465 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:04:03.465 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:04:03.465 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:04:03.465 [166/203] Compiling C object tools/kvs.p/kvs.c.o 00:04:03.726 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:04:03.726 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:04:03.726 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:04:03.726 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:04:03.726 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:04:03.726 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:04:03.726 [173/203] Linking static target lib/libxnvme.a 00:04:03.726 [174/203] Linking target tests/xnvme_tests_buf 00:04:03.726 [175/203] Linking target tests/xnvme_tests_async_intf 00:04:03.726 [176/203] Linking target tests/xnvme_tests_lblk 00:04:03.726 [177/203] Linking target tests/xnvme_tests_enum 00:04:03.726 [178/203] Linking target tests/xnvme_tests_ioworker 00:04:03.726 [179/203] Linking target tests/xnvme_tests_scc 00:04:03.726 [180/203] Linking target tests/xnvme_tests_xnvme_cli 00:04:03.726 [181/203] Linking target tests/xnvme_tests_xnvme_file 00:04:03.726 [182/203] Linking target tests/xnvme_tests_znd_explicit_open 00:04:03.726 [183/203] Linking target tests/xnvme_tests_cli 00:04:03.726 [184/203] Linking target tests/xnvme_tests_znd_append 00:04:03.726 [185/203] Linking target tests/xnvme_tests_znd_state 00:04:03.726 [186/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:04:03.726 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:04:03.726 [188/203] Linking target tests/xnvme_tests_kvs 00:04:03.726 [189/203] Linking target tools/xnvme_file 00:04:03.726 [190/203] Linking target tools/xdd 00:04:03.726 [191/203] Linking target 
tests/xnvme_tests_map 00:04:03.726 [192/203] Linking target tools/lblk 00:04:03.726 [193/203] Linking target examples/xnvme_hello 00:04:03.726 [194/203] Linking target examples/xnvme_single_async 00:04:03.726 [195/203] Linking target examples/xnvme_dev 00:04:03.726 [196/203] Linking target tools/kvs 00:04:03.726 [197/203] Linking target examples/xnvme_enum 00:04:03.726 [198/203] Linking target tools/xnvme 00:04:03.726 [199/203] Linking target examples/xnvme_io_async 00:04:03.726 [200/203] Linking target tools/zoned 00:04:03.726 [201/203] Linking target examples/xnvme_single_sync 00:04:03.984 [202/203] Linking target examples/zoned_io_async 00:04:03.984 [203/203] Linking target examples/zoned_io_sync 00:04:03.984 INFO: autodetecting backend as ninja 00:04:03.984 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:03.984 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:36.068 CC lib/ut/ut.o 00:04:36.068 CC lib/ut_mock/mock.o 00:04:36.068 CC lib/log/log.o 00:04:36.068 CC lib/log/log_deprecated.o 00:04:36.068 CC lib/log/log_flags.o 00:04:36.068 LIB libspdk_ut_mock.a 00:04:36.068 LIB libspdk_log.a 00:04:36.068 LIB libspdk_ut.a 00:04:36.068 SO libspdk_ut_mock.so.6.0 00:04:36.068 SO libspdk_ut.so.2.0 00:04:36.068 SO libspdk_log.so.7.0 00:04:36.068 SYMLINK libspdk_ut.so 00:04:36.068 SYMLINK libspdk_ut_mock.so 00:04:36.068 SYMLINK libspdk_log.so 00:04:36.068 CC lib/dma/dma.o 00:04:36.068 CC lib/util/base64.o 00:04:36.068 CC lib/util/bit_array.o 00:04:36.068 CC lib/util/cpuset.o 00:04:36.068 CC lib/util/crc32.o 00:04:36.068 CC lib/ioat/ioat.o 00:04:36.068 CC lib/util/crc16.o 00:04:36.068 CC lib/util/crc32c.o 00:04:36.068 CXX lib/trace_parser/trace.o 00:04:36.068 CC lib/vfio_user/host/vfio_user_pci.o 00:04:36.068 CC lib/vfio_user/host/vfio_user.o 00:04:36.068 CC lib/util/crc32_ieee.o 00:04:36.068 CC lib/util/crc64.o 00:04:36.068 CC lib/util/dif.o 00:04:36.068 LIB libspdk_dma.a 00:04:36.068 SO libspdk_dma.so.5.0 00:04:36.068 CC lib/util/fd.o 00:04:36.068 CC lib/util/fd_group.o 00:04:36.068 CC lib/util/file.o 00:04:36.068 SYMLINK libspdk_dma.so 00:04:36.068 CC lib/util/hexlify.o 00:04:36.068 CC lib/util/iov.o 00:04:36.068 CC lib/util/math.o 00:04:36.068 CC lib/util/net.o 00:04:36.068 LIB libspdk_vfio_user.a 00:04:36.068 CC lib/util/pipe.o 00:04:36.068 LIB libspdk_ioat.a 00:04:36.068 SO libspdk_vfio_user.so.5.0 00:04:36.068 CC lib/util/strerror_tls.o 00:04:36.068 SO libspdk_ioat.so.7.0 00:04:36.068 CC lib/util/string.o 00:04:36.068 CC lib/util/uuid.o 00:04:36.068 CC lib/util/xor.o 00:04:36.068 SYMLINK libspdk_vfio_user.so 00:04:36.068 SYMLINK libspdk_ioat.so 00:04:36.068 CC lib/util/zipf.o 00:04:36.068 CC lib/util/md5.o 00:04:36.068 LIB libspdk_util.a 00:04:36.068 SO libspdk_util.so.10.0 00:04:36.068 LIB libspdk_trace_parser.a 00:04:36.068 SO libspdk_trace_parser.so.6.0 00:04:36.068 SYMLINK libspdk_util.so 00:04:36.068 SYMLINK libspdk_trace_parser.so 00:04:36.068 CC lib/rdma_utils/rdma_utils.o 00:04:36.068 CC lib/vmd/vmd.o 00:04:36.068 CC lib/idxd/idxd.o 00:04:36.068 CC lib/json/json_parse.o 00:04:36.068 CC lib/vmd/led.o 00:04:36.068 CC lib/json/json_util.o 00:04:36.068 CC lib/json/json_write.o 00:04:36.068 CC lib/env_dpdk/env.o 00:04:36.068 CC lib/rdma_provider/common.o 00:04:36.068 CC lib/conf/conf.o 00:04:36.068 CC lib/env_dpdk/memory.o 00:04:36.068 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:36.068 CC lib/env_dpdk/pci.o 00:04:36.068 LIB libspdk_conf.a 00:04:36.068 CC lib/idxd/idxd_user.o 00:04:36.068 SO libspdk_conf.so.6.0 
00:04:36.068 LIB libspdk_rdma_utils.a 00:04:36.068 LIB libspdk_json.a 00:04:36.068 SO libspdk_rdma_utils.so.1.0 00:04:36.068 SYMLINK libspdk_conf.so 00:04:36.068 SO libspdk_json.so.6.0 00:04:36.068 LIB libspdk_rdma_provider.a 00:04:36.068 CC lib/idxd/idxd_kernel.o 00:04:36.068 SYMLINK libspdk_rdma_utils.so 00:04:36.068 CC lib/env_dpdk/init.o 00:04:36.068 SO libspdk_rdma_provider.so.6.0 00:04:36.068 SYMLINK libspdk_json.so 00:04:36.068 CC lib/env_dpdk/threads.o 00:04:36.069 SYMLINK libspdk_rdma_provider.so 00:04:36.069 CC lib/env_dpdk/pci_ioat.o 00:04:36.069 CC lib/env_dpdk/pci_virtio.o 00:04:36.069 CC lib/env_dpdk/pci_vmd.o 00:04:36.069 CC lib/env_dpdk/pci_idxd.o 00:04:36.069 CC lib/env_dpdk/pci_event.o 00:04:36.069 CC lib/env_dpdk/sigbus_handler.o 00:04:36.069 CC lib/jsonrpc/jsonrpc_server.o 00:04:36.069 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:36.069 CC lib/env_dpdk/pci_dpdk.o 00:04:36.069 LIB libspdk_idxd.a 00:04:36.069 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:36.069 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:36.069 LIB libspdk_vmd.a 00:04:36.069 SO libspdk_idxd.so.12.1 00:04:36.069 SO libspdk_vmd.so.6.0 00:04:36.069 CC lib/jsonrpc/jsonrpc_client.o 00:04:36.069 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:36.069 SYMLINK libspdk_idxd.so 00:04:36.069 SYMLINK libspdk_vmd.so 00:04:36.069 LIB libspdk_jsonrpc.a 00:04:36.069 SO libspdk_jsonrpc.so.6.0 00:04:36.069 SYMLINK libspdk_jsonrpc.so 00:04:36.069 CC lib/rpc/rpc.o 00:04:36.069 LIB libspdk_env_dpdk.a 00:04:36.069 LIB libspdk_rpc.a 00:04:36.069 SO libspdk_rpc.so.6.0 00:04:36.069 SO libspdk_env_dpdk.so.15.0 00:04:36.069 SYMLINK libspdk_rpc.so 00:04:36.069 SYMLINK libspdk_env_dpdk.so 00:04:36.069 CC lib/notify/notify.o 00:04:36.069 CC lib/notify/notify_rpc.o 00:04:36.069 CC lib/keyring/keyring.o 00:04:36.069 CC lib/trace/trace.o 00:04:36.069 CC lib/keyring/keyring_rpc.o 00:04:36.069 CC lib/trace/trace_rpc.o 00:04:36.069 CC lib/trace/trace_flags.o 00:04:36.069 LIB libspdk_notify.a 00:04:36.069 LIB libspdk_keyring.a 00:04:36.069 SO libspdk_notify.so.6.0 00:04:36.069 SO libspdk_keyring.so.2.0 00:04:36.069 LIB libspdk_trace.a 00:04:36.069 SYMLINK libspdk_notify.so 00:04:36.069 SO libspdk_trace.so.11.0 00:04:36.069 SYMLINK libspdk_keyring.so 00:04:36.069 SYMLINK libspdk_trace.so 00:04:36.069 CC lib/thread/thread.o 00:04:36.069 CC lib/thread/iobuf.o 00:04:36.069 CC lib/sock/sock_rpc.o 00:04:36.069 CC lib/sock/sock.o 00:04:36.327 LIB libspdk_sock.a 00:04:36.327 SO libspdk_sock.so.10.0 00:04:36.327 SYMLINK libspdk_sock.so 00:04:36.586 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:36.586 CC lib/nvme/nvme_ctrlr.o 00:04:36.586 CC lib/nvme/nvme_ns_cmd.o 00:04:36.586 CC lib/nvme/nvme_fabric.o 00:04:36.586 CC lib/nvme/nvme_pcie_common.o 00:04:36.586 CC lib/nvme/nvme_qpair.o 00:04:36.586 CC lib/nvme/nvme.o 00:04:36.586 CC lib/nvme/nvme_ns.o 00:04:36.586 CC lib/nvme/nvme_pcie.o 00:04:37.151 CC lib/nvme/nvme_quirks.o 00:04:37.410 CC lib/nvme/nvme_transport.o 00:04:37.410 CC lib/nvme/nvme_discovery.o 00:04:37.410 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:37.410 LIB libspdk_thread.a 00:04:37.410 SO libspdk_thread.so.10.1 00:04:37.410 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:37.410 CC lib/nvme/nvme_tcp.o 00:04:37.410 SYMLINK libspdk_thread.so 00:04:37.410 CC lib/nvme/nvme_opal.o 00:04:37.410 CC lib/nvme/nvme_io_msg.o 00:04:37.667 CC lib/nvme/nvme_poll_group.o 00:04:37.667 CC lib/nvme/nvme_zns.o 00:04:37.667 CC lib/nvme/nvme_stubs.o 00:04:37.667 CC lib/nvme/nvme_auth.o 00:04:37.926 CC lib/nvme/nvme_cuse.o 00:04:37.926 CC lib/nvme/nvme_rdma.o 00:04:38.185 CC lib/accel/accel.o 00:04:38.185 
CC lib/accel/accel_rpc.o 00:04:38.185 CC lib/blob/blobstore.o 00:04:38.185 CC lib/blob/request.o 00:04:38.185 CC lib/init/json_config.o 00:04:38.443 CC lib/init/subsystem.o 00:04:38.443 CC lib/init/subsystem_rpc.o 00:04:38.443 CC lib/virtio/virtio.o 00:04:38.701 CC lib/fsdev/fsdev.o 00:04:38.701 CC lib/init/rpc.o 00:04:38.701 CC lib/fsdev/fsdev_io.o 00:04:38.701 CC lib/fsdev/fsdev_rpc.o 00:04:38.701 CC lib/blob/zeroes.o 00:04:38.701 LIB libspdk_init.a 00:04:38.701 SO libspdk_init.so.6.0 00:04:38.958 CC lib/virtio/virtio_vhost_user.o 00:04:38.958 CC lib/blob/blob_bs_dev.o 00:04:38.958 SYMLINK libspdk_init.so 00:04:38.958 CC lib/virtio/virtio_vfio_user.o 00:04:38.958 CC lib/virtio/virtio_pci.o 00:04:38.958 CC lib/accel/accel_sw.o 00:04:39.215 CC lib/event/app.o 00:04:39.215 CC lib/event/log_rpc.o 00:04:39.215 CC lib/event/reactor.o 00:04:39.215 CC lib/event/app_rpc.o 00:04:39.215 CC lib/event/scheduler_static.o 00:04:39.215 LIB libspdk_virtio.a 00:04:39.215 SO libspdk_virtio.so.7.0 00:04:39.215 LIB libspdk_fsdev.a 00:04:39.215 LIB libspdk_accel.a 00:04:39.215 SO libspdk_fsdev.so.1.0 00:04:39.215 SYMLINK libspdk_virtio.so 00:04:39.215 LIB libspdk_nvme.a 00:04:39.215 SO libspdk_accel.so.16.0 00:04:39.215 SYMLINK libspdk_fsdev.so 00:04:39.472 SYMLINK libspdk_accel.so 00:04:39.472 SO libspdk_nvme.so.14.0 00:04:39.472 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:39.472 CC lib/bdev/bdev.o 00:04:39.472 LIB libspdk_event.a 00:04:39.472 CC lib/bdev/bdev_rpc.o 00:04:39.472 CC lib/bdev/scsi_nvme.o 00:04:39.472 CC lib/bdev/part.o 00:04:39.472 CC lib/bdev/bdev_zone.o 00:04:39.731 SO libspdk_event.so.14.0 00:04:39.731 SYMLINK libspdk_nvme.so 00:04:39.731 SYMLINK libspdk_event.so 00:04:40.297 LIB libspdk_fuse_dispatcher.a 00:04:40.297 SO libspdk_fuse_dispatcher.so.1.0 00:04:40.297 SYMLINK libspdk_fuse_dispatcher.so 00:04:41.232 LIB libspdk_blob.a 00:04:41.232 SO libspdk_blob.so.11.0 00:04:41.232 SYMLINK libspdk_blob.so 00:04:41.490 CC lib/blobfs/tree.o 00:04:41.490 CC lib/blobfs/blobfs.o 00:04:41.490 CC lib/lvol/lvol.o 00:04:42.425 LIB libspdk_blobfs.a 00:04:42.425 SO libspdk_blobfs.so.10.0 00:04:42.425 LIB libspdk_lvol.a 00:04:42.425 LIB libspdk_bdev.a 00:04:42.425 SYMLINK libspdk_blobfs.so 00:04:42.425 SO libspdk_lvol.so.10.0 00:04:42.425 SO libspdk_bdev.so.16.0 00:04:42.425 SYMLINK libspdk_lvol.so 00:04:42.425 SYMLINK libspdk_bdev.so 00:04:42.684 CC lib/scsi/dev.o 00:04:42.684 CC lib/scsi/port.o 00:04:42.684 CC lib/scsi/lun.o 00:04:42.684 CC lib/ftl/ftl_core.o 00:04:42.684 CC lib/scsi/scsi.o 00:04:42.684 CC lib/ftl/ftl_init.o 00:04:42.684 CC lib/scsi/scsi_bdev.o 00:04:42.684 CC lib/nbd/nbd.o 00:04:42.684 CC lib/nvmf/ctrlr.o 00:04:42.684 CC lib/ublk/ublk.o 00:04:42.684 CC lib/ublk/ublk_rpc.o 00:04:42.684 CC lib/scsi/scsi_pr.o 00:04:42.942 CC lib/nvmf/ctrlr_discovery.o 00:04:42.942 CC lib/nvmf/ctrlr_bdev.o 00:04:42.942 CC lib/ftl/ftl_layout.o 00:04:42.942 CC lib/ftl/ftl_debug.o 00:04:42.942 CC lib/nbd/nbd_rpc.o 00:04:42.942 CC lib/nvmf/subsystem.o 00:04:43.200 CC lib/nvmf/nvmf.o 00:04:43.200 CC lib/scsi/scsi_rpc.o 00:04:43.200 LIB libspdk_nbd.a 00:04:43.200 CC lib/scsi/task.o 00:04:43.200 SO libspdk_nbd.so.7.0 00:04:43.200 CC lib/ftl/ftl_io.o 00:04:43.200 SYMLINK libspdk_nbd.so 00:04:43.200 CC lib/nvmf/nvmf_rpc.o 00:04:43.200 LIB libspdk_ublk.a 00:04:43.200 CC lib/nvmf/transport.o 00:04:43.200 CC lib/nvmf/tcp.o 00:04:43.200 SO libspdk_ublk.so.3.0 00:04:43.458 LIB libspdk_scsi.a 00:04:43.458 SYMLINK libspdk_ublk.so 00:04:43.458 CC lib/nvmf/stubs.o 00:04:43.458 SO libspdk_scsi.so.9.0 00:04:43.458 SYMLINK 
libspdk_scsi.so 00:04:43.458 CC lib/nvmf/mdns_server.o 00:04:43.458 CC lib/ftl/ftl_sb.o 00:04:43.458 CC lib/nvmf/rdma.o 00:04:43.717 CC lib/ftl/ftl_l2p.o 00:04:43.717 CC lib/nvmf/auth.o 00:04:43.975 CC lib/ftl/ftl_l2p_flat.o 00:04:43.975 CC lib/ftl/ftl_nv_cache.o 00:04:43.975 CC lib/ftl/ftl_band.o 00:04:43.975 CC lib/ftl/ftl_band_ops.o 00:04:43.975 CC lib/ftl/ftl_writer.o 00:04:44.233 CC lib/iscsi/conn.o 00:04:44.233 CC lib/vhost/vhost.o 00:04:44.233 CC lib/vhost/vhost_rpc.o 00:04:44.233 CC lib/vhost/vhost_scsi.o 00:04:44.233 CC lib/ftl/ftl_rq.o 00:04:44.491 CC lib/vhost/vhost_blk.o 00:04:44.491 CC lib/vhost/rte_vhost_user.o 00:04:44.749 CC lib/iscsi/init_grp.o 00:04:44.749 CC lib/iscsi/iscsi.o 00:04:44.749 CC lib/iscsi/param.o 00:04:44.749 CC lib/iscsi/portal_grp.o 00:04:44.749 CC lib/ftl/ftl_reloc.o 00:04:45.007 CC lib/ftl/ftl_l2p_cache.o 00:04:45.007 CC lib/ftl/ftl_p2l.o 00:04:45.007 CC lib/iscsi/tgt_node.o 00:04:45.007 CC lib/iscsi/iscsi_subsystem.o 00:04:45.265 CC lib/iscsi/iscsi_rpc.o 00:04:45.265 CC lib/ftl/ftl_p2l_log.o 00:04:45.265 CC lib/ftl/mngt/ftl_mngt.o 00:04:45.265 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:45.265 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:45.265 LIB libspdk_vhost.a 00:04:45.524 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:45.524 SO libspdk_vhost.so.8.0 00:04:45.524 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:45.524 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:45.524 SYMLINK libspdk_vhost.so 00:04:45.524 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:45.524 CC lib/iscsi/task.o 00:04:45.524 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:45.524 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:45.524 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:45.524 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:45.524 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:45.524 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:45.783 CC lib/ftl/utils/ftl_conf.o 00:04:45.783 CC lib/ftl/utils/ftl_md.o 00:04:45.783 CC lib/ftl/utils/ftl_mempool.o 00:04:45.783 CC lib/ftl/utils/ftl_bitmap.o 00:04:45.783 CC lib/ftl/utils/ftl_property.o 00:04:45.783 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:45.783 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:45.783 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:45.783 LIB libspdk_iscsi.a 00:04:45.783 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:45.783 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:45.783 LIB libspdk_nvmf.a 00:04:45.783 SO libspdk_iscsi.so.8.0 00:04:46.044 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:46.044 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:46.044 SO libspdk_nvmf.so.19.0 00:04:46.044 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:46.044 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:46.044 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:46.044 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:46.044 SYMLINK libspdk_iscsi.so 00:04:46.044 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:46.044 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:46.044 CC lib/ftl/base/ftl_base_dev.o 00:04:46.044 CC lib/ftl/base/ftl_base_bdev.o 00:04:46.044 CC lib/ftl/ftl_trace.o 00:04:46.304 SYMLINK libspdk_nvmf.so 00:04:46.304 LIB libspdk_ftl.a 00:04:46.566 SO libspdk_ftl.so.9.0 00:04:46.826 SYMLINK libspdk_ftl.so 00:04:47.084 CC module/env_dpdk/env_dpdk_rpc.o 00:04:47.084 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:47.084 CC module/blob/bdev/blob_bdev.o 00:04:47.084 CC module/accel/error/accel_error.o 00:04:47.084 CC module/accel/ioat/accel_ioat.o 00:04:47.084 CC module/keyring/file/keyring.o 00:04:47.084 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:47.084 CC module/sock/posix/posix.o 00:04:47.084 CC module/scheduler/gscheduler/gscheduler.o 00:04:47.084 CC 
module/fsdev/aio/fsdev_aio.o 00:04:47.084 LIB libspdk_env_dpdk_rpc.a 00:04:47.084 SO libspdk_env_dpdk_rpc.so.6.0 00:04:47.084 SYMLINK libspdk_env_dpdk_rpc.so 00:04:47.084 CC module/accel/ioat/accel_ioat_rpc.o 00:04:47.084 CC module/keyring/file/keyring_rpc.o 00:04:47.084 LIB libspdk_scheduler_dpdk_governor.a 00:04:47.084 LIB libspdk_scheduler_gscheduler.a 00:04:47.084 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:47.084 SO libspdk_scheduler_gscheduler.so.4.0 00:04:47.084 CC module/accel/error/accel_error_rpc.o 00:04:47.084 LIB libspdk_scheduler_dynamic.a 00:04:47.343 LIB libspdk_accel_ioat.a 00:04:47.343 SO libspdk_scheduler_dynamic.so.4.0 00:04:47.343 LIB libspdk_blob_bdev.a 00:04:47.343 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:47.343 SO libspdk_accel_ioat.so.6.0 00:04:47.343 SO libspdk_blob_bdev.so.11.0 00:04:47.343 SYMLINK libspdk_scheduler_gscheduler.so 00:04:47.343 LIB libspdk_keyring_file.a 00:04:47.343 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:47.343 SYMLINK libspdk_scheduler_dynamic.so 00:04:47.343 CC module/fsdev/aio/linux_aio_mgr.o 00:04:47.343 CC module/accel/dsa/accel_dsa.o 00:04:47.343 SO libspdk_keyring_file.so.2.0 00:04:47.343 SYMLINK libspdk_accel_ioat.so 00:04:47.343 SYMLINK libspdk_blob_bdev.so 00:04:47.343 LIB libspdk_accel_error.a 00:04:47.343 SO libspdk_accel_error.so.2.0 00:04:47.343 SYMLINK libspdk_keyring_file.so 00:04:47.343 CC module/accel/dsa/accel_dsa_rpc.o 00:04:47.343 CC module/accel/iaa/accel_iaa.o 00:04:47.343 SYMLINK libspdk_accel_error.so 00:04:47.343 CC module/accel/iaa/accel_iaa_rpc.o 00:04:47.602 CC module/keyring/linux/keyring.o 00:04:47.602 CC module/keyring/linux/keyring_rpc.o 00:04:47.602 CC module/bdev/delay/vbdev_delay.o 00:04:47.602 LIB libspdk_fsdev_aio.a 00:04:47.602 CC module/bdev/error/vbdev_error.o 00:04:47.602 LIB libspdk_accel_dsa.a 00:04:47.602 CC module/blobfs/bdev/blobfs_bdev.o 00:04:47.602 SO libspdk_fsdev_aio.so.1.0 00:04:47.602 LIB libspdk_accel_iaa.a 00:04:47.602 SO libspdk_accel_dsa.so.5.0 00:04:47.602 LIB libspdk_keyring_linux.a 00:04:47.602 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:47.602 SO libspdk_accel_iaa.so.3.0 00:04:47.602 SO libspdk_keyring_linux.so.1.0 00:04:47.602 SYMLINK libspdk_fsdev_aio.so 00:04:47.602 CC module/bdev/gpt/gpt.o 00:04:47.602 SYMLINK libspdk_accel_dsa.so 00:04:47.602 SYMLINK libspdk_accel_iaa.so 00:04:47.602 CC module/bdev/gpt/vbdev_gpt.o 00:04:47.602 SYMLINK libspdk_keyring_linux.so 00:04:47.602 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:47.602 CC module/bdev/error/vbdev_error_rpc.o 00:04:47.861 LIB libspdk_blobfs_bdev.a 00:04:47.861 CC module/bdev/lvol/vbdev_lvol.o 00:04:47.861 CC module/bdev/malloc/bdev_malloc.o 00:04:47.861 SO libspdk_blobfs_bdev.so.6.0 00:04:47.861 LIB libspdk_sock_posix.a 00:04:47.861 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:47.861 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:47.861 SO libspdk_sock_posix.so.6.0 00:04:47.861 SYMLINK libspdk_blobfs_bdev.so 00:04:47.861 LIB libspdk_bdev_error.a 00:04:47.861 LIB libspdk_bdev_gpt.a 00:04:47.861 LIB libspdk_bdev_delay.a 00:04:47.861 SYMLINK libspdk_sock_posix.so 00:04:47.861 SO libspdk_bdev_error.so.6.0 00:04:47.861 SO libspdk_bdev_gpt.so.6.0 00:04:47.861 SO libspdk_bdev_delay.so.6.0 00:04:47.861 SYMLINK libspdk_bdev_error.so 00:04:47.861 CC module/bdev/nvme/bdev_nvme.o 00:04:47.861 SYMLINK libspdk_bdev_delay.so 00:04:47.861 SYMLINK libspdk_bdev_gpt.so 00:04:47.861 CC module/bdev/null/bdev_null.o 00:04:48.119 CC module/bdev/passthru/vbdev_passthru.o 00:04:48.119 CC module/bdev/raid/bdev_raid.o 00:04:48.119 CC 
module/bdev/split/vbdev_split.o 00:04:48.119 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:48.119 CC module/bdev/xnvme/bdev_xnvme.o 00:04:48.119 CC module/bdev/split/vbdev_split_rpc.o 00:04:48.119 LIB libspdk_bdev_malloc.a 00:04:48.119 SO libspdk_bdev_malloc.so.6.0 00:04:48.119 SYMLINK libspdk_bdev_malloc.so 00:04:48.119 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:48.119 LIB libspdk_bdev_lvol.a 00:04:48.119 CC module/bdev/null/bdev_null_rpc.o 00:04:48.119 SO libspdk_bdev_lvol.so.6.0 00:04:48.377 LIB libspdk_bdev_split.a 00:04:48.377 SYMLINK libspdk_bdev_lvol.so 00:04:48.377 SO libspdk_bdev_split.so.6.0 00:04:48.377 CC module/bdev/raid/bdev_raid_rpc.o 00:04:48.377 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:48.377 CC module/bdev/raid/bdev_raid_sb.o 00:04:48.377 SYMLINK libspdk_bdev_split.so 00:04:48.377 LIB libspdk_bdev_passthru.a 00:04:48.377 SO libspdk_bdev_passthru.so.6.0 00:04:48.377 LIB libspdk_bdev_null.a 00:04:48.377 CC module/bdev/aio/bdev_aio.o 00:04:48.377 SO libspdk_bdev_null.so.6.0 00:04:48.378 LIB libspdk_bdev_xnvme.a 00:04:48.378 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:48.378 SYMLINK libspdk_bdev_passthru.so 00:04:48.378 SO libspdk_bdev_xnvme.so.3.0 00:04:48.378 CC module/bdev/ftl/bdev_ftl.o 00:04:48.378 SYMLINK libspdk_bdev_null.so 00:04:48.378 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:48.378 SYMLINK libspdk_bdev_xnvme.so 00:04:48.378 CC module/bdev/raid/raid0.o 00:04:48.635 CC module/bdev/raid/raid1.o 00:04:48.635 LIB libspdk_bdev_zone_block.a 00:04:48.635 CC module/bdev/iscsi/bdev_iscsi.o 00:04:48.635 SO libspdk_bdev_zone_block.so.6.0 00:04:48.635 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:48.635 SYMLINK libspdk_bdev_zone_block.so 00:04:48.635 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:48.635 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:48.635 CC module/bdev/raid/concat.o 00:04:48.635 CC module/bdev/aio/bdev_aio_rpc.o 00:04:48.635 LIB libspdk_bdev_ftl.a 00:04:48.636 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:48.636 SO libspdk_bdev_ftl.so.6.0 00:04:48.894 CC module/bdev/nvme/nvme_rpc.o 00:04:48.894 SYMLINK libspdk_bdev_ftl.so 00:04:48.894 CC module/bdev/nvme/bdev_mdns_client.o 00:04:48.894 LIB libspdk_bdev_aio.a 00:04:48.894 CC module/bdev/nvme/vbdev_opal.o 00:04:48.894 SO libspdk_bdev_aio.so.6.0 00:04:48.894 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:48.894 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:48.894 LIB libspdk_bdev_raid.a 00:04:48.894 SYMLINK libspdk_bdev_aio.so 00:04:48.894 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:48.894 LIB libspdk_bdev_virtio.a 00:04:48.894 SO libspdk_bdev_raid.so.6.0 00:04:48.894 SO libspdk_bdev_virtio.so.6.0 00:04:48.894 SYMLINK libspdk_bdev_virtio.so 00:04:49.152 SYMLINK libspdk_bdev_raid.so 00:04:49.152 LIB libspdk_bdev_iscsi.a 00:04:49.152 SO libspdk_bdev_iscsi.so.6.0 00:04:49.152 SYMLINK libspdk_bdev_iscsi.so 00:04:49.719 LIB libspdk_bdev_nvme.a 00:04:49.977 SO libspdk_bdev_nvme.so.7.0 00:04:49.977 SYMLINK libspdk_bdev_nvme.so 00:04:50.236 CC module/event/subsystems/vmd/vmd.o 00:04:50.236 CC module/event/subsystems/scheduler/scheduler.o 00:04:50.236 CC module/event/subsystems/iobuf/iobuf.o 00:04:50.236 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:50.236 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:50.236 CC module/event/subsystems/fsdev/fsdev.o 00:04:50.236 CC module/event/subsystems/keyring/keyring.o 00:04:50.236 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:50.236 CC module/event/subsystems/sock/sock.o 00:04:50.494 LIB libspdk_event_fsdev.a 00:04:50.494 LIB libspdk_event_scheduler.a 
00:04:50.494 LIB libspdk_event_keyring.a 00:04:50.494 SO libspdk_event_fsdev.so.1.0 00:04:50.494 SO libspdk_event_scheduler.so.4.0 00:04:50.494 LIB libspdk_event_vhost_blk.a 00:04:50.494 LIB libspdk_event_vmd.a 00:04:50.494 SO libspdk_event_keyring.so.1.0 00:04:50.494 LIB libspdk_event_sock.a 00:04:50.494 LIB libspdk_event_iobuf.a 00:04:50.494 SO libspdk_event_vhost_blk.so.3.0 00:04:50.494 SO libspdk_event_vmd.so.6.0 00:04:50.494 SO libspdk_event_sock.so.5.0 00:04:50.494 SYMLINK libspdk_event_scheduler.so 00:04:50.494 SYMLINK libspdk_event_fsdev.so 00:04:50.494 SO libspdk_event_iobuf.so.3.0 00:04:50.494 SYMLINK libspdk_event_keyring.so 00:04:50.494 SYMLINK libspdk_event_vhost_blk.so 00:04:50.494 SYMLINK libspdk_event_sock.so 00:04:50.494 SYMLINK libspdk_event_vmd.so 00:04:50.494 SYMLINK libspdk_event_iobuf.so 00:04:50.752 CC module/event/subsystems/accel/accel.o 00:04:50.752 LIB libspdk_event_accel.a 00:04:51.010 SO libspdk_event_accel.so.6.0 00:04:51.010 SYMLINK libspdk_event_accel.so 00:04:51.268 CC module/event/subsystems/bdev/bdev.o 00:04:51.268 LIB libspdk_event_bdev.a 00:04:51.268 SO libspdk_event_bdev.so.6.0 00:04:51.268 SYMLINK libspdk_event_bdev.so 00:04:51.527 CC module/event/subsystems/nbd/nbd.o 00:04:51.527 CC module/event/subsystems/scsi/scsi.o 00:04:51.527 CC module/event/subsystems/ublk/ublk.o 00:04:51.527 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:51.527 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:51.527 LIB libspdk_event_nbd.a 00:04:51.527 LIB libspdk_event_ublk.a 00:04:51.527 LIB libspdk_event_scsi.a 00:04:51.527 SO libspdk_event_nbd.so.6.0 00:04:51.527 SO libspdk_event_ublk.so.3.0 00:04:51.785 SO libspdk_event_scsi.so.6.0 00:04:51.785 SYMLINK libspdk_event_nbd.so 00:04:51.785 SYMLINK libspdk_event_ublk.so 00:04:51.785 SYMLINK libspdk_event_scsi.so 00:04:51.785 LIB libspdk_event_nvmf.a 00:04:51.785 SO libspdk_event_nvmf.so.6.0 00:04:51.785 SYMLINK libspdk_event_nvmf.so 00:04:51.785 CC module/event/subsystems/iscsi/iscsi.o 00:04:51.785 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:52.043 LIB libspdk_event_vhost_scsi.a 00:04:52.043 LIB libspdk_event_iscsi.a 00:04:52.043 SO libspdk_event_vhost_scsi.so.3.0 00:04:52.043 SO libspdk_event_iscsi.so.6.0 00:04:52.043 SYMLINK libspdk_event_vhost_scsi.so 00:04:52.043 SYMLINK libspdk_event_iscsi.so 00:04:52.302 SO libspdk.so.6.0 00:04:52.302 SYMLINK libspdk.so 00:04:52.302 CC app/trace_record/trace_record.o 00:04:52.302 CXX app/trace/trace.o 00:04:52.302 TEST_HEADER include/spdk/accel.h 00:04:52.302 TEST_HEADER include/spdk/accel_module.h 00:04:52.302 TEST_HEADER include/spdk/assert.h 00:04:52.302 TEST_HEADER include/spdk/barrier.h 00:04:52.302 TEST_HEADER include/spdk/base64.h 00:04:52.302 TEST_HEADER include/spdk/bdev.h 00:04:52.302 TEST_HEADER include/spdk/bdev_module.h 00:04:52.302 TEST_HEADER include/spdk/bdev_zone.h 00:04:52.302 TEST_HEADER include/spdk/bit_array.h 00:04:52.302 TEST_HEADER include/spdk/bit_pool.h 00:04:52.302 TEST_HEADER include/spdk/blob_bdev.h 00:04:52.302 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:52.302 TEST_HEADER include/spdk/blobfs.h 00:04:52.302 CC app/nvmf_tgt/nvmf_main.o 00:04:52.302 TEST_HEADER include/spdk/blob.h 00:04:52.302 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:52.302 TEST_HEADER include/spdk/conf.h 00:04:52.302 TEST_HEADER include/spdk/config.h 00:04:52.302 TEST_HEADER include/spdk/cpuset.h 00:04:52.302 TEST_HEADER include/spdk/crc16.h 00:04:52.302 TEST_HEADER include/spdk/crc32.h 00:04:52.302 TEST_HEADER include/spdk/crc64.h 00:04:52.302 TEST_HEADER 
include/spdk/dif.h 00:04:52.302 TEST_HEADER include/spdk/dma.h 00:04:52.302 TEST_HEADER include/spdk/endian.h 00:04:52.302 TEST_HEADER include/spdk/env_dpdk.h 00:04:52.302 TEST_HEADER include/spdk/env.h 00:04:52.302 TEST_HEADER include/spdk/event.h 00:04:52.302 TEST_HEADER include/spdk/fd_group.h 00:04:52.302 TEST_HEADER include/spdk/fd.h 00:04:52.302 TEST_HEADER include/spdk/file.h 00:04:52.302 TEST_HEADER include/spdk/fsdev.h 00:04:52.302 TEST_HEADER include/spdk/fsdev_module.h 00:04:52.302 TEST_HEADER include/spdk/ftl.h 00:04:52.302 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:52.302 TEST_HEADER include/spdk/gpt_spec.h 00:04:52.302 TEST_HEADER include/spdk/hexlify.h 00:04:52.302 TEST_HEADER include/spdk/histogram_data.h 00:04:52.302 TEST_HEADER include/spdk/idxd.h 00:04:52.560 TEST_HEADER include/spdk/idxd_spec.h 00:04:52.560 TEST_HEADER include/spdk/init.h 00:04:52.560 TEST_HEADER include/spdk/ioat.h 00:04:52.560 TEST_HEADER include/spdk/ioat_spec.h 00:04:52.560 CC examples/ioat/perf/perf.o 00:04:52.560 TEST_HEADER include/spdk/iscsi_spec.h 00:04:52.560 TEST_HEADER include/spdk/json.h 00:04:52.560 TEST_HEADER include/spdk/jsonrpc.h 00:04:52.560 CC examples/util/zipf/zipf.o 00:04:52.560 TEST_HEADER include/spdk/keyring.h 00:04:52.560 TEST_HEADER include/spdk/keyring_module.h 00:04:52.560 TEST_HEADER include/spdk/likely.h 00:04:52.560 TEST_HEADER include/spdk/log.h 00:04:52.560 TEST_HEADER include/spdk/lvol.h 00:04:52.560 TEST_HEADER include/spdk/md5.h 00:04:52.560 CC test/thread/poller_perf/poller_perf.o 00:04:52.560 TEST_HEADER include/spdk/memory.h 00:04:52.560 TEST_HEADER include/spdk/mmio.h 00:04:52.560 TEST_HEADER include/spdk/nbd.h 00:04:52.560 CC test/app/bdev_svc/bdev_svc.o 00:04:52.560 TEST_HEADER include/spdk/net.h 00:04:52.560 TEST_HEADER include/spdk/notify.h 00:04:52.560 TEST_HEADER include/spdk/nvme.h 00:04:52.560 TEST_HEADER include/spdk/nvme_intel.h 00:04:52.560 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:52.560 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:52.560 CC test/dma/test_dma/test_dma.o 00:04:52.560 TEST_HEADER include/spdk/nvme_spec.h 00:04:52.560 TEST_HEADER include/spdk/nvme_zns.h 00:04:52.560 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:52.560 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:52.560 TEST_HEADER include/spdk/nvmf.h 00:04:52.560 TEST_HEADER include/spdk/nvmf_spec.h 00:04:52.560 TEST_HEADER include/spdk/nvmf_transport.h 00:04:52.561 TEST_HEADER include/spdk/opal.h 00:04:52.561 TEST_HEADER include/spdk/opal_spec.h 00:04:52.561 TEST_HEADER include/spdk/pci_ids.h 00:04:52.561 TEST_HEADER include/spdk/pipe.h 00:04:52.561 TEST_HEADER include/spdk/queue.h 00:04:52.561 TEST_HEADER include/spdk/reduce.h 00:04:52.561 TEST_HEADER include/spdk/rpc.h 00:04:52.561 TEST_HEADER include/spdk/scheduler.h 00:04:52.561 TEST_HEADER include/spdk/scsi.h 00:04:52.561 TEST_HEADER include/spdk/scsi_spec.h 00:04:52.561 TEST_HEADER include/spdk/sock.h 00:04:52.561 TEST_HEADER include/spdk/stdinc.h 00:04:52.561 TEST_HEADER include/spdk/string.h 00:04:52.561 TEST_HEADER include/spdk/thread.h 00:04:52.561 TEST_HEADER include/spdk/trace.h 00:04:52.561 LINK nvmf_tgt 00:04:52.561 TEST_HEADER include/spdk/trace_parser.h 00:04:52.561 TEST_HEADER include/spdk/tree.h 00:04:52.561 TEST_HEADER include/spdk/ublk.h 00:04:52.561 TEST_HEADER include/spdk/util.h 00:04:52.561 TEST_HEADER include/spdk/uuid.h 00:04:52.561 TEST_HEADER include/spdk/version.h 00:04:52.561 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:52.561 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:52.561 TEST_HEADER 
include/spdk/vhost.h 00:04:52.561 TEST_HEADER include/spdk/vmd.h 00:04:52.561 TEST_HEADER include/spdk/xor.h 00:04:52.561 TEST_HEADER include/spdk/zipf.h 00:04:52.561 CXX test/cpp_headers/accel.o 00:04:52.561 LINK interrupt_tgt 00:04:52.561 LINK poller_perf 00:04:52.561 LINK zipf 00:04:52.561 LINK bdev_svc 00:04:52.561 LINK ioat_perf 00:04:52.561 LINK spdk_trace_record 00:04:52.561 LINK spdk_trace 00:04:52.819 CXX test/cpp_headers/accel_module.o 00:04:52.819 CXX test/cpp_headers/assert.o 00:04:52.819 CXX test/cpp_headers/barrier.o 00:04:52.819 CC examples/ioat/verify/verify.o 00:04:52.819 CC test/app/histogram_perf/histogram_perf.o 00:04:52.819 CC app/iscsi_tgt/iscsi_tgt.o 00:04:52.819 CC test/app/jsoncat/jsoncat.o 00:04:52.819 CXX test/cpp_headers/base64.o 00:04:52.819 CC app/spdk_tgt/spdk_tgt.o 00:04:52.819 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:52.819 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:52.819 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:52.819 LINK test_dma 00:04:53.077 LINK histogram_perf 00:04:53.077 LINK jsoncat 00:04:53.077 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:53.077 CXX test/cpp_headers/bdev.o 00:04:53.077 LINK iscsi_tgt 00:04:53.077 LINK verify 00:04:53.077 CXX test/cpp_headers/bdev_module.o 00:04:53.077 LINK spdk_tgt 00:04:53.077 CXX test/cpp_headers/bdev_zone.o 00:04:53.077 CC test/app/stub/stub.o 00:04:53.077 CXX test/cpp_headers/bit_array.o 00:04:53.077 CXX test/cpp_headers/bit_pool.o 00:04:53.077 LINK nvme_fuzz 00:04:53.077 CXX test/cpp_headers/blob_bdev.o 00:04:53.348 CXX test/cpp_headers/blobfs_bdev.o 00:04:53.348 LINK stub 00:04:53.348 LINK vhost_fuzz 00:04:53.348 CC examples/thread/thread/thread_ex.o 00:04:53.348 CXX test/cpp_headers/blobfs.o 00:04:53.348 CC app/spdk_lspci/spdk_lspci.o 00:04:53.348 CXX test/cpp_headers/blob.o 00:04:53.348 CXX test/cpp_headers/conf.o 00:04:53.348 CC app/spdk_nvme_perf/perf.o 00:04:53.348 LINK spdk_lspci 00:04:53.348 CC test/env/vtophys/vtophys.o 00:04:53.647 CXX test/cpp_headers/config.o 00:04:53.647 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:53.647 CXX test/cpp_headers/cpuset.o 00:04:53.647 CC test/env/memory/memory_ut.o 00:04:53.647 CXX test/cpp_headers/crc16.o 00:04:53.647 CC test/env/mem_callbacks/mem_callbacks.o 00:04:53.647 LINK thread 00:04:53.647 CC test/env/pci/pci_ut.o 00:04:53.647 LINK vtophys 00:04:53.647 LINK env_dpdk_post_init 00:04:53.647 CXX test/cpp_headers/crc32.o 00:04:53.647 CXX test/cpp_headers/crc64.o 00:04:53.647 CXX test/cpp_headers/dif.o 00:04:53.647 CC app/spdk_nvme_identify/identify.o 00:04:53.904 CC examples/sock/hello_world/hello_sock.o 00:04:53.904 CXX test/cpp_headers/dma.o 00:04:53.904 LINK mem_callbacks 00:04:53.904 CC examples/vmd/lsvmd/lsvmd.o 00:04:53.904 CC examples/idxd/perf/perf.o 00:04:53.904 LINK pci_ut 00:04:53.904 CXX test/cpp_headers/endian.o 00:04:53.904 LINK lsvmd 00:04:53.904 LINK spdk_nvme_perf 00:04:53.904 LINK hello_sock 00:04:54.162 CXX test/cpp_headers/env_dpdk.o 00:04:54.162 CC examples/vmd/led/led.o 00:04:54.162 CXX test/cpp_headers/env.o 00:04:54.162 CC test/event/event_perf/event_perf.o 00:04:54.162 CXX test/cpp_headers/event.o 00:04:54.162 LINK iscsi_fuzz 00:04:54.162 LINK idxd_perf 00:04:54.162 LINK led 00:04:54.162 CXX test/cpp_headers/fd_group.o 00:04:54.162 CC test/event/reactor/reactor.o 00:04:54.162 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:54.162 LINK event_perf 00:04:54.420 LINK memory_ut 00:04:54.420 CC test/event/reactor_perf/reactor_perf.o 00:04:54.420 CC test/event/app_repeat/app_repeat.o 00:04:54.420 CXX 
test/cpp_headers/fd.o 00:04:54.420 LINK reactor 00:04:54.420 CXX test/cpp_headers/file.o 00:04:54.420 CC app/spdk_nvme_discover/discovery_aer.o 00:04:54.420 LINK reactor_perf 00:04:54.420 CC test/event/scheduler/scheduler.o 00:04:54.420 CC test/nvme/aer/aer.o 00:04:54.420 LINK app_repeat 00:04:54.420 LINK hello_fsdev 00:04:54.677 CXX test/cpp_headers/fsdev.o 00:04:54.677 LINK spdk_nvme_identify 00:04:54.677 CC test/nvme/reset/reset.o 00:04:54.677 CC test/nvme/sgl/sgl.o 00:04:54.677 CC test/nvme/e2edp/nvme_dp.o 00:04:54.677 LINK spdk_nvme_discover 00:04:54.677 CXX test/cpp_headers/fsdev_module.o 00:04:54.677 CXX test/cpp_headers/ftl.o 00:04:54.677 LINK scheduler 00:04:54.677 CC test/nvme/overhead/overhead.o 00:04:54.677 LINK aer 00:04:54.936 CC examples/accel/perf/accel_perf.o 00:04:54.936 LINK reset 00:04:54.936 CC app/spdk_top/spdk_top.o 00:04:54.936 CXX test/cpp_headers/fuse_dispatcher.o 00:04:54.936 LINK sgl 00:04:54.936 LINK nvme_dp 00:04:54.936 CC test/rpc_client/rpc_client_test.o 00:04:54.936 CC examples/blob/hello_world/hello_blob.o 00:04:54.936 LINK overhead 00:04:54.936 CXX test/cpp_headers/gpt_spec.o 00:04:54.936 CC examples/nvme/hello_world/hello_world.o 00:04:54.936 CC examples/nvme/reconnect/reconnect.o 00:04:54.936 CC app/vhost/vhost.o 00:04:55.194 LINK rpc_client_test 00:04:55.194 CXX test/cpp_headers/hexlify.o 00:04:55.194 LINK hello_blob 00:04:55.194 LINK hello_world 00:04:55.194 CC test/accel/dif/dif.o 00:04:55.194 CC test/nvme/err_injection/err_injection.o 00:04:55.194 LINK vhost 00:04:55.194 CXX test/cpp_headers/histogram_data.o 00:04:55.194 LINK accel_perf 00:04:55.453 CC test/nvme/startup/startup.o 00:04:55.453 CC test/nvme/reserve/reserve.o 00:04:55.453 CXX test/cpp_headers/idxd.o 00:04:55.453 LINK err_injection 00:04:55.453 LINK reconnect 00:04:55.453 CC test/nvme/simple_copy/simple_copy.o 00:04:55.453 CC examples/blob/cli/blobcli.o 00:04:55.453 LINK startup 00:04:55.453 LINK reserve 00:04:55.453 CC test/nvme/connect_stress/connect_stress.o 00:04:55.453 CXX test/cpp_headers/idxd_spec.o 00:04:55.453 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:55.712 CXX test/cpp_headers/init.o 00:04:55.712 LINK simple_copy 00:04:55.712 CXX test/cpp_headers/ioat.o 00:04:55.712 CC examples/bdev/hello_world/hello_bdev.o 00:04:55.712 LINK connect_stress 00:04:55.712 CC test/nvme/boot_partition/boot_partition.o 00:04:55.712 LINK dif 00:04:55.712 LINK spdk_top 00:04:55.712 CXX test/cpp_headers/ioat_spec.o 00:04:55.712 LINK blobcli 00:04:55.712 CC test/nvme/compliance/nvme_compliance.o 00:04:55.712 LINK boot_partition 00:04:55.712 CC examples/nvme/arbitration/arbitration.o 00:04:55.970 CC examples/nvme/hotplug/hotplug.o 00:04:55.970 LINK hello_bdev 00:04:55.970 CXX test/cpp_headers/iscsi_spec.o 00:04:55.970 CXX test/cpp_headers/json.o 00:04:55.970 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:55.970 CXX test/cpp_headers/jsonrpc.o 00:04:55.970 CC app/spdk_dd/spdk_dd.o 00:04:55.970 LINK hotplug 00:04:55.970 LINK arbitration 00:04:55.970 LINK nvme_manage 00:04:55.970 CXX test/cpp_headers/keyring.o 00:04:55.970 CC examples/nvme/abort/abort.o 00:04:55.970 LINK cmb_copy 00:04:56.232 LINK nvme_compliance 00:04:56.232 CC test/nvme/fused_ordering/fused_ordering.o 00:04:56.232 CXX test/cpp_headers/keyring_module.o 00:04:56.232 CC examples/bdev/bdevperf/bdevperf.o 00:04:56.232 CXX test/cpp_headers/likely.o 00:04:56.232 CXX test/cpp_headers/log.o 00:04:56.232 CXX test/cpp_headers/lvol.o 00:04:56.232 CXX test/cpp_headers/md5.o 00:04:56.232 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:56.232 CXX 
test/cpp_headers/memory.o 00:04:56.232 LINK fused_ordering 00:04:56.232 CXX test/cpp_headers/mmio.o 00:04:56.232 CC test/nvme/cuse/cuse.o 00:04:56.232 LINK spdk_dd 00:04:56.232 CC test/nvme/fdp/fdp.o 00:04:56.493 CXX test/cpp_headers/nbd.o 00:04:56.493 CXX test/cpp_headers/net.o 00:04:56.493 LINK doorbell_aers 00:04:56.493 LINK abort 00:04:56.493 CXX test/cpp_headers/notify.o 00:04:56.493 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:56.493 CXX test/cpp_headers/nvme.o 00:04:56.493 CXX test/cpp_headers/nvme_intel.o 00:04:56.493 CC test/blobfs/mkfs/mkfs.o 00:04:56.493 CXX test/cpp_headers/nvme_ocssd.o 00:04:56.493 LINK fdp 00:04:56.751 CC app/fio/nvme/fio_plugin.o 00:04:56.751 LINK pmr_persistence 00:04:56.751 CC app/fio/bdev/fio_plugin.o 00:04:56.751 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:56.751 LINK mkfs 00:04:56.751 CXX test/cpp_headers/nvme_spec.o 00:04:56.751 CXX test/cpp_headers/nvme_zns.o 00:04:56.751 CXX test/cpp_headers/nvmf_cmd.o 00:04:56.751 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:56.751 CC test/lvol/esnap/esnap.o 00:04:57.008 CXX test/cpp_headers/nvmf.o 00:04:57.008 CXX test/cpp_headers/nvmf_spec.o 00:04:57.008 CC test/bdev/bdevio/bdevio.o 00:04:57.008 LINK bdevperf 00:04:57.008 CXX test/cpp_headers/nvmf_transport.o 00:04:57.008 CXX test/cpp_headers/opal.o 00:04:57.008 CXX test/cpp_headers/opal_spec.o 00:04:57.008 LINK spdk_nvme 00:04:57.008 CXX test/cpp_headers/pci_ids.o 00:04:57.008 CXX test/cpp_headers/pipe.o 00:04:57.008 CXX test/cpp_headers/queue.o 00:04:57.008 CXX test/cpp_headers/reduce.o 00:04:57.008 LINK spdk_bdev 00:04:57.008 CXX test/cpp_headers/rpc.o 00:04:57.266 CXX test/cpp_headers/scheduler.o 00:04:57.266 CXX test/cpp_headers/scsi.o 00:04:57.266 CXX test/cpp_headers/scsi_spec.o 00:04:57.266 CXX test/cpp_headers/sock.o 00:04:57.266 CXX test/cpp_headers/stdinc.o 00:04:57.266 CXX test/cpp_headers/string.o 00:04:57.266 CXX test/cpp_headers/thread.o 00:04:57.266 CC examples/nvmf/nvmf/nvmf.o 00:04:57.266 CXX test/cpp_headers/trace.o 00:04:57.266 LINK bdevio 00:04:57.267 CXX test/cpp_headers/trace_parser.o 00:04:57.267 CXX test/cpp_headers/tree.o 00:04:57.267 CXX test/cpp_headers/ublk.o 00:04:57.267 CXX test/cpp_headers/util.o 00:04:57.574 CXX test/cpp_headers/uuid.o 00:04:57.574 CXX test/cpp_headers/version.o 00:04:57.574 CXX test/cpp_headers/vfio_user_pci.o 00:04:57.574 CXX test/cpp_headers/vfio_user_spec.o 00:04:57.574 CXX test/cpp_headers/vhost.o 00:04:57.574 CXX test/cpp_headers/vmd.o 00:04:57.574 CXX test/cpp_headers/xor.o 00:04:57.574 CXX test/cpp_headers/zipf.o 00:04:57.574 LINK cuse 00:04:57.574 LINK nvmf 00:05:01.792 LINK esnap 00:05:01.792 00:05:01.792 real 1m2.453s 00:05:01.792 user 5m6.312s 00:05:01.792 sys 0m49.326s 00:05:01.792 05:06:54 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:01.792 05:06:54 make -- common/autotest_common.sh@10 -- $ set +x 00:05:01.792 ************************************ 00:05:01.792 END TEST make 00:05:01.792 ************************************ 00:05:01.792 05:06:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:01.792 05:06:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:01.792 05:06:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:01.792 05:06:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.792 05:06:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:01.792 05:06:55 -- pm/common@44 -- $ pid=5796 00:05:01.792 05:06:55 -- pm/common@50 -- $ kill -TERM 5796 00:05:01.792 05:06:55 
-- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.792 05:06:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:01.792 05:06:55 -- pm/common@44 -- $ pid=5797 00:05:01.792 05:06:55 -- pm/common@50 -- $ kill -TERM 5797 00:05:02.054 05:06:55 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:02.054 05:06:55 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:02.054 05:06:55 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:02.054 05:06:55 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:02.054 05:06:55 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.054 05:06:55 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.054 05:06:55 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.054 05:06:55 -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.054 05:06:55 -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.054 05:06:55 -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.054 05:06:55 -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.054 05:06:55 -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.054 05:06:55 -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.054 05:06:55 -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.054 05:06:55 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.054 05:06:55 -- scripts/common.sh@344 -- # case "$op" in 00:05:02.054 05:06:55 -- scripts/common.sh@345 -- # : 1 00:05:02.054 05:06:55 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.054 05:06:55 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.054 05:06:55 -- scripts/common.sh@365 -- # decimal 1 00:05:02.054 05:06:55 -- scripts/common.sh@353 -- # local d=1 00:05:02.054 05:06:55 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.054 05:06:55 -- scripts/common.sh@355 -- # echo 1 00:05:02.054 05:06:55 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.054 05:06:55 -- scripts/common.sh@366 -- # decimal 2 00:05:02.054 05:06:55 -- scripts/common.sh@353 -- # local d=2 00:05:02.054 05:06:55 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.054 05:06:55 -- scripts/common.sh@355 -- # echo 2 00:05:02.054 05:06:55 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.054 05:06:55 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.054 05:06:55 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.054 05:06:55 -- scripts/common.sh@368 -- # return 0 00:05:02.054 05:06:55 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.054 05:06:55 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:02.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.054 --rc genhtml_branch_coverage=1 00:05:02.054 --rc genhtml_function_coverage=1 00:05:02.054 --rc genhtml_legend=1 00:05:02.054 --rc geninfo_all_blocks=1 00:05:02.055 --rc geninfo_unexecuted_blocks=1 00:05:02.055 00:05:02.055 ' 00:05:02.055 05:06:55 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:02.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.055 --rc genhtml_branch_coverage=1 00:05:02.055 --rc genhtml_function_coverage=1 00:05:02.055 --rc genhtml_legend=1 00:05:02.055 --rc geninfo_all_blocks=1 00:05:02.055 --rc geninfo_unexecuted_blocks=1 00:05:02.055 00:05:02.055 ' 00:05:02.055 05:06:55 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:02.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.055 --rc genhtml_branch_coverage=1 
00:05:02.055 --rc genhtml_function_coverage=1 00:05:02.055 --rc genhtml_legend=1 00:05:02.055 --rc geninfo_all_blocks=1 00:05:02.055 --rc geninfo_unexecuted_blocks=1 00:05:02.055 00:05:02.055 ' 00:05:02.055 05:06:55 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:02.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.055 --rc genhtml_branch_coverage=1 00:05:02.055 --rc genhtml_function_coverage=1 00:05:02.055 --rc genhtml_legend=1 00:05:02.055 --rc geninfo_all_blocks=1 00:05:02.055 --rc geninfo_unexecuted_blocks=1 00:05:02.055 00:05:02.055 ' 00:05:02.055 05:06:55 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:02.055 05:06:55 -- nvmf/common.sh@7 -- # uname -s 00:05:02.055 05:06:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.055 05:06:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.055 05:06:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.055 05:06:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.055 05:06:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.055 05:06:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.055 05:06:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.055 05:06:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.055 05:06:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.055 05:06:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.055 05:06:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d8a8bdb7-2cda-4d81-be2d-e7d8b190f340 00:05:02.055 05:06:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=d8a8bdb7-2cda-4d81-be2d-e7d8b190f340 00:05:02.055 05:06:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.055 05:06:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.055 05:06:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:02.055 05:06:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.055 05:06:55 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:02.055 05:06:55 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.055 05:06:55 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.055 05:06:55 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.055 05:06:55 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.055 05:06:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.055 05:06:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.055 05:06:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.055 05:06:55 -- paths/export.sh@5 -- # export PATH 00:05:02.055 05:06:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.055 05:06:55 -- nvmf/common.sh@51 -- # : 0 00:05:02.055 05:06:55 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.055 05:06:55 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.055 05:06:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.055 05:06:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.055 05:06:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.055 05:06:55 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.055 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.055 05:06:55 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.055 05:06:55 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.055 05:06:55 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.055 05:06:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:02.055 05:06:55 -- spdk/autotest.sh@32 -- # uname -s 00:05:02.055 05:06:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:02.055 05:06:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:02.055 05:06:55 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:02.055 05:06:55 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:02.055 05:06:55 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:02.055 05:06:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:02.055 05:06:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:02.055 05:06:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:02.055 05:06:55 -- spdk/autotest.sh@48 -- # udevadm_pid=66964 00:05:02.055 05:06:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:02.055 05:06:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:02.055 05:06:55 -- pm/common@17 -- # local monitor 00:05:02.055 05:06:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:02.055 05:06:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:02.055 05:06:55 -- pm/common@25 -- # sleep 1 00:05:02.055 05:06:55 -- pm/common@21 -- # date +%s 00:05:02.055 05:06:55 -- pm/common@21 -- # date +%s 00:05:02.055 05:06:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731215215 00:05:02.055 05:06:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731215215 00:05:02.055 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731215215_collect-cpu-load.pm.log 00:05:02.055 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731215215_collect-vmstat.pm.log 00:05:03.436 05:06:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:03.436 05:06:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:03.436 05:06:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.436 05:06:56 -- common/autotest_common.sh@10 -- # set +x 00:05:03.436 05:06:56 -- spdk/autotest.sh@59 -- # create_test_list 
00:05:03.436 05:06:56 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:03.436 05:06:56 -- common/autotest_common.sh@10 -- # set +x 00:05:03.436 05:06:56 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:03.436 05:06:56 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:03.436 05:06:56 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:03.436 05:06:56 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:03.436 05:06:56 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:03.436 05:06:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:03.436 05:06:56 -- common/autotest_common.sh@1455 -- # uname 00:05:03.436 05:06:56 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:03.436 05:06:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:03.436 05:06:56 -- common/autotest_common.sh@1475 -- # uname 00:05:03.436 05:06:56 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:03.436 05:06:56 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:03.436 05:06:56 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:03.436 lcov: LCOV version 1.15 00:05:03.436 05:06:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:18.298 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:18.298 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:33.180 05:07:24 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:33.180 05:07:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:33.180 05:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:33.180 05:07:24 -- spdk/autotest.sh@78 -- # rm -f 00:05:33.180 05:07:24 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:33.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.180 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:33.180 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:33.180 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:33.180 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:33.180 05:07:25 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:33.180 05:07:25 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:33.180 05:07:25 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:33.180 05:07:25 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:33.180 05:07:25 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:33.180 05:07:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:33.180 05:07:25 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:33.180 05:07:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:33.180 05:07:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:33.180 05:07:25 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:33.180 05:07:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:33.180 05:07:25 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:33.180 05:07:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:33.180 05:07:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:33.180 05:07:25 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:33.180 05:07:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:33.180 05:07:25 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:33.180 05:07:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:33.180 05:07:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:33.180 05:07:25 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:33.180 05:07:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:33.180 05:07:25 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:33.180 05:07:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:33.180 05:07:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:33.180 05:07:25 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:33.180 05:07:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:05:33.180 05:07:25 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:33.180 05:07:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:33.180 05:07:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:33.180 05:07:25 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:33.180 05:07:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:33.180 05:07:25 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:33.180 05:07:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:33.180 05:07:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:33.180 05:07:25 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:33.180 05:07:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:33.180 05:07:25 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:33.180 05:07:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:33.180 05:07:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:33.180 05:07:25 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:33.180 05:07:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.180 05:07:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:33.180 05:07:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:33.180 05:07:25 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:33.180 05:07:25 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:33.180 No valid GPT data, bailing 00:05:33.180 05:07:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:33.180 05:07:25 -- scripts/common.sh@394 -- # pt= 00:05:33.180 05:07:25 -- scripts/common.sh@395 -- # return 1 00:05:33.180 05:07:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:33.180 1+0 records in 00:05:33.180 1+0 records out 00:05:33.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00552962 s, 190 MB/s 
00:05:33.180 05:07:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.180 05:07:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:33.180 05:07:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:33.180 05:07:25 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:33.180 05:07:25 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:33.180 No valid GPT data, bailing 00:05:33.180 05:07:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:33.180 05:07:25 -- scripts/common.sh@394 -- # pt= 00:05:33.180 05:07:25 -- scripts/common.sh@395 -- # return 1 00:05:33.180 05:07:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:33.180 1+0 records in 00:05:33.180 1+0 records out 00:05:33.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212807 s, 49.3 MB/s 00:05:33.180 05:07:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.180 05:07:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:33.180 05:07:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:33.180 05:07:25 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:33.180 05:07:25 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:33.180 No valid GPT data, bailing 00:05:33.180 05:07:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:33.180 05:07:25 -- scripts/common.sh@394 -- # pt= 00:05:33.180 05:07:25 -- scripts/common.sh@395 -- # return 1 00:05:33.180 05:07:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:33.180 1+0 records in 00:05:33.180 1+0 records out 00:05:33.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00567462 s, 185 MB/s 00:05:33.180 05:07:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.180 05:07:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:33.180 05:07:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:33.180 05:07:25 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:33.180 05:07:25 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:33.180 No valid GPT data, bailing 00:05:33.180 05:07:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:33.180 05:07:25 -- scripts/common.sh@394 -- # pt= 00:05:33.180 05:07:25 -- scripts/common.sh@395 -- # return 1 00:05:33.180 05:07:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:33.180 1+0 records in 00:05:33.180 1+0 records out 00:05:33.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494314 s, 212 MB/s 00:05:33.180 05:07:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.180 05:07:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:33.180 05:07:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:33.180 05:07:25 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:33.180 05:07:25 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:33.181 No valid GPT data, bailing 00:05:33.181 05:07:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:33.181 05:07:25 -- scripts/common.sh@394 -- # pt= 00:05:33.181 05:07:25 -- scripts/common.sh@395 -- # return 1 00:05:33.181 05:07:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:33.181 1+0 records in 00:05:33.181 1+0 records out 00:05:33.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00550311 s, 191 
MB/s 00:05:33.181 05:07:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.181 05:07:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:33.181 05:07:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:33.181 05:07:25 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:33.181 05:07:25 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:33.181 No valid GPT data, bailing 00:05:33.181 05:07:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:33.181 05:07:26 -- scripts/common.sh@394 -- # pt= 00:05:33.181 05:07:26 -- scripts/common.sh@395 -- # return 1 00:05:33.181 05:07:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:33.181 1+0 records in 00:05:33.181 1+0 records out 00:05:33.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00616296 s, 170 MB/s 00:05:33.181 05:07:26 -- spdk/autotest.sh@105 -- # sync 00:05:33.181 05:07:26 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:33.181 05:07:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:33.181 05:07:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:34.558 05:07:27 -- spdk/autotest.sh@111 -- # uname -s 00:05:34.558 05:07:27 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:34.558 05:07:27 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:34.558 05:07:27 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:34.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.384 Hugepages 00:05:35.384 node hugesize free / total 00:05:35.384 node0 1048576kB 0 / 0 00:05:35.384 node0 2048kB 0 / 0 00:05:35.384 00:05:35.384 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:35.384 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:35.384 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:35.384 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:35.642 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:35.642 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:35.642 05:07:28 -- spdk/autotest.sh@117 -- # uname -s 00:05:35.642 05:07:28 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:35.642 05:07:28 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:35.642 05:07:28 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:36.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.778 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.778 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.778 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.778 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.778 05:07:29 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:37.719 05:07:30 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:37.719 05:07:30 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:37.719 05:07:30 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:37.719 05:07:30 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:37.719 05:07:30 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:37.719 05:07:30 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:37.719 05:07:30 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
00:05:37.719 05:07:30 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:37.719 05:07:30 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:37.982 05:07:30 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:37.982 05:07:30 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:37.982 05:07:30 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.252 Waiting for block devices as requested 00:05:38.252 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:38.252 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:38.513 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:38.513 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:43.796 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:43.796 05:07:36 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:43.796 05:07:36 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:43.796 05:07:36 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:43.796 05:07:36 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:43.796 05:07:36 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:43.796 05:07:36 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:43.796 05:07:36 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:43.796 05:07:36 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:43.796 05:07:36 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:43.796 05:07:36 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:43.796 05:07:36 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:43.796 05:07:36 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:43.796 05:07:36 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:43.796 05:07:36 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:43.796 05:07:36 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:43.796 05:07:36 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:43.796 05:07:36 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:43.796 05:07:36 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:43.796 05:07:36 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:43.797 05:07:36 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:43.797 05:07:36 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:43.797 05:07:36 -- common/autotest_common.sh@1541 -- # continue 00:05:43.797 05:07:36 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:43.797 05:07:36 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:43.797 05:07:36 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:43.797 05:07:36 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:43.797 05:07:36 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
00:05:43.797 05:07:36 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:43.797 05:07:36 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:43.797 05:07:36 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:43.797 05:07:36 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:43.797 05:07:36 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:43.797 05:07:36 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:43.797 05:07:36 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:43.797 05:07:36 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:43.797 05:07:36 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:43.797 05:07:36 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:43.797 05:07:36 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:43.797 05:07:36 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:43.797 05:07:36 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:43.797 05:07:36 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:43.797 05:07:36 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:43.797 05:07:36 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:43.797 05:07:36 -- common/autotest_common.sh@1541 -- # continue 00:05:43.797 05:07:36 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:43.797 05:07:36 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:43.797 05:07:36 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:43.797 05:07:36 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:43.797 05:07:36 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:43.797 05:07:36 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:43.797 05:07:36 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:43.797 05:07:36 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:43.797 05:07:36 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:43.797 05:07:36 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:43.797 05:07:36 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:43.797 05:07:36 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:43.797 05:07:36 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:43.797 05:07:36 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:43.797 05:07:36 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:43.797 05:07:36 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:43.797 05:07:36 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:43.797 05:07:36 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:43.797 05:07:36 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:43.797 05:07:36 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:43.797 05:07:36 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:43.797 05:07:36 -- common/autotest_common.sh@1541 -- # continue 00:05:43.797 05:07:36 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:43.797 05:07:36 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:43.797 05:07:36 -- 
common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:43.797 05:07:36 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:43.797 05:07:36 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:43.797 05:07:36 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:43.797 05:07:36 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:43.797 05:07:36 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:43.797 05:07:36 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:43.797 05:07:36 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:43.797 05:07:36 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:43.797 05:07:36 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:43.797 05:07:36 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:43.797 05:07:36 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:43.797 05:07:36 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:43.797 05:07:36 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:43.797 05:07:36 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:43.797 05:07:36 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:43.797 05:07:36 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:43.797 05:07:36 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:43.797 05:07:36 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:43.797 05:07:36 -- common/autotest_common.sh@1541 -- # continue 00:05:43.797 05:07:36 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:43.797 05:07:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:43.797 05:07:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.797 05:07:36 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:43.797 05:07:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:43.797 05:07:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.797 05:07:36 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:44.371 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.943 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.943 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.943 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.943 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.943 05:07:38 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:44.943 05:07:38 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:44.943 05:07:38 -- common/autotest_common.sh@10 -- # set +x 00:05:44.943 05:07:38 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:44.943 05:07:38 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:44.943 05:07:38 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:44.943 05:07:38 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:44.943 05:07:38 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:44.943 05:07:38 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:44.943 05:07:38 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:44.943 05:07:38 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:44.943 05:07:38 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:44.943 
05:07:38 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:44.944 05:07:38 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:44.944 05:07:38 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:44.944 05:07:38 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:45.205 05:07:38 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:45.205 05:07:38 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:45.205 05:07:38 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:45.205 05:07:38 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:45.205 05:07:38 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:45.205 05:07:38 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:45.205 05:07:38 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:45.205 05:07:38 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:45.205 05:07:38 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:45.205 05:07:38 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:45.205 05:07:38 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:45.205 05:07:38 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:45.205 05:07:38 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:45.205 05:07:38 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:45.205 05:07:38 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:45.205 05:07:38 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:45.205 05:07:38 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:45.205 05:07:38 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:45.205 05:07:38 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:45.205 05:07:38 -- common/autotest_common.sh@1570 -- # return 0 00:05:45.205 05:07:38 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:45.205 05:07:38 -- common/autotest_common.sh@1578 -- # return 0 00:05:45.205 05:07:38 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:45.205 05:07:38 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:45.205 05:07:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:45.205 05:07:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:45.205 05:07:38 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:45.205 05:07:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:45.205 05:07:38 -- common/autotest_common.sh@10 -- # set +x 00:05:45.205 05:07:38 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:45.205 05:07:38 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:45.205 05:07:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.205 05:07:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.205 05:07:38 -- common/autotest_common.sh@10 -- # set +x 00:05:45.205 ************************************ 00:05:45.205 START TEST env 00:05:45.205 ************************************ 00:05:45.205 05:07:38 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:45.205 * Looking for test storage... 
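The opal_revert_cleanup step above only acts on controllers whose PCI device ID is 0x0a54; the QEMU-emulated controllers all report 0x0010, so the filter leaves the list empty and the function returns without reverting anything. Per BDF, the check reduces to this sketch (opal_bdfs is an illustrative name, not the script's own):

    opal_bdfs=()
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == 0x0a54 ]] && opal_bdfs+=("$bdf")
    done
    (( ${#opal_bdfs[@]} > 0 )) || return 0    # nothing to revert on this VM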
00:05:45.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:45.205 05:07:38 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:45.205 05:07:38 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:45.205 05:07:38 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:45.205 05:07:38 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:45.205 05:07:38 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.205 05:07:38 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.205 05:07:38 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.205 05:07:38 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.205 05:07:38 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.205 05:07:38 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.205 05:07:38 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.205 05:07:38 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.205 05:07:38 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.205 05:07:38 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.205 05:07:38 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.205 05:07:38 env -- scripts/common.sh@344 -- # case "$op" in 00:05:45.205 05:07:38 env -- scripts/common.sh@345 -- # : 1 00:05:45.205 05:07:38 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.205 05:07:38 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.205 05:07:38 env -- scripts/common.sh@365 -- # decimal 1 00:05:45.205 05:07:38 env -- scripts/common.sh@353 -- # local d=1 00:05:45.205 05:07:38 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.205 05:07:38 env -- scripts/common.sh@355 -- # echo 1 00:05:45.205 05:07:38 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.205 05:07:38 env -- scripts/common.sh@366 -- # decimal 2 00:05:45.205 05:07:38 env -- scripts/common.sh@353 -- # local d=2 00:05:45.205 05:07:38 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.205 05:07:38 env -- scripts/common.sh@355 -- # echo 2 00:05:45.205 05:07:38 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.205 05:07:38 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.205 05:07:38 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.205 05:07:38 env -- scripts/common.sh@368 -- # return 0 00:05:45.205 05:07:38 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.205 05:07:38 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:45.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.205 --rc genhtml_branch_coverage=1 00:05:45.205 --rc genhtml_function_coverage=1 00:05:45.205 --rc genhtml_legend=1 00:05:45.205 --rc geninfo_all_blocks=1 00:05:45.205 --rc geninfo_unexecuted_blocks=1 00:05:45.205 00:05:45.205 ' 00:05:45.205 05:07:38 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:45.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.205 --rc genhtml_branch_coverage=1 00:05:45.205 --rc genhtml_function_coverage=1 00:05:45.205 --rc genhtml_legend=1 00:05:45.205 --rc geninfo_all_blocks=1 00:05:45.205 --rc geninfo_unexecuted_blocks=1 00:05:45.205 00:05:45.205 ' 00:05:45.205 05:07:38 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:45.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.205 --rc genhtml_branch_coverage=1 00:05:45.205 --rc genhtml_function_coverage=1 00:05:45.205 --rc 
genhtml_legend=1 00:05:45.205 --rc geninfo_all_blocks=1 00:05:45.205 --rc geninfo_unexecuted_blocks=1 00:05:45.205 00:05:45.205 ' 00:05:45.205 05:07:38 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:45.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.205 --rc genhtml_branch_coverage=1 00:05:45.205 --rc genhtml_function_coverage=1 00:05:45.205 --rc genhtml_legend=1 00:05:45.205 --rc geninfo_all_blocks=1 00:05:45.206 --rc geninfo_unexecuted_blocks=1 00:05:45.206 00:05:45.206 ' 00:05:45.206 05:07:38 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:45.206 05:07:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.206 05:07:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.206 05:07:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.206 ************************************ 00:05:45.206 START TEST env_memory 00:05:45.206 ************************************ 00:05:45.206 05:07:38 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:45.206 00:05:45.206 00:05:45.206 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.206 http://cunit.sourceforge.net/ 00:05:45.206 00:05:45.206 00:05:45.206 Suite: memory 00:05:45.206 Test: alloc and free memory map ...[2024-11-10 05:07:38.412108] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:45.467 passed 00:05:45.467 Test: mem map translation ...[2024-11-10 05:07:38.450891] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:45.467 [2024-11-10 05:07:38.451031] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:45.467 [2024-11-10 05:07:38.451143] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:45.467 [2024-11-10 05:07:38.451246] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:45.467 passed 00:05:45.467 Test: mem map registration ...[2024-11-10 05:07:38.519469] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:45.467 [2024-11-10 05:07:38.519575] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:45.467 passed 00:05:45.467 Test: mem map adjacent registrations ...passed 00:05:45.467 00:05:45.467 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.467 suites 1 1 n/a 0 0 00:05:45.467 tests 4 4 4 0 0 00:05:45.467 asserts 152 152 152 0 n/a 00:05:45.467 00:05:45.467 Elapsed time = 0.233 seconds 00:05:45.467 00:05:45.467 real 0m0.265s 00:05:45.467 user 0m0.238s 00:05:45.467 sys 0m0.019s 00:05:45.467 05:07:38 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.467 ************************************ 00:05:45.467 END TEST env_memory 00:05:45.467 ************************************ 00:05:45.467 05:07:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:45.467 05:07:38 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:45.467 05:07:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.467 05:07:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.467 05:07:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.467 ************************************ 00:05:45.467 START TEST env_vtophys 00:05:45.467 ************************************ 00:05:45.467 05:07:38 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:45.467 EAL: lib.eal log level changed from notice to debug 00:05:45.467 EAL: Detected lcore 0 as core 0 on socket 0 00:05:45.467 EAL: Detected lcore 1 as core 0 on socket 0 00:05:45.467 EAL: Detected lcore 2 as core 0 on socket 0 00:05:45.467 EAL: Detected lcore 3 as core 0 on socket 0 00:05:45.467 EAL: Detected lcore 4 as core 0 on socket 0 00:05:45.467 EAL: Detected lcore 5 as core 0 on socket 0 00:05:45.468 EAL: Detected lcore 6 as core 0 on socket 0 00:05:45.468 EAL: Detected lcore 7 as core 0 on socket 0 00:05:45.468 EAL: Detected lcore 8 as core 0 on socket 0 00:05:45.468 EAL: Detected lcore 9 as core 0 on socket 0 00:05:45.468 EAL: Maximum logical cores by configuration: 128 00:05:45.468 EAL: Detected CPU lcores: 10 00:05:45.468 EAL: Detected NUMA nodes: 1 00:05:45.468 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:45.468 EAL: Detected shared linkage of DPDK 00:05:45.468 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:45.468 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:45.468 EAL: Registered [vdev] bus. 00:05:45.468 EAL: bus.vdev log level changed from disabled to notice 00:05:45.468 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:45.468 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:45.468 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:45.468 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:45.468 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:45.468 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:45.468 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:45.468 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:45.729 EAL: No shared files mode enabled, IPC will be disabled 00:05:45.729 EAL: No shared files mode enabled, IPC is disabled 00:05:45.729 EAL: Selected IOVA mode 'PA' 00:05:45.729 EAL: Probing VFIO support... 00:05:45.729 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:45.729 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:45.729 EAL: Ask a virtual area of 0x2e000 bytes 00:05:45.729 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:45.729 EAL: Setting up physically contiguous memory... 
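EAL finds neither /sys/module/vfio nor /sys/module/vfio_pci in this VM, so it skips VFIO and selects IOVA mode 'PA': DPDK addresses memory by physical address through the uio_pci_generic binding applied earlier. A preflight check along these lines mirrors what EAL probes here (a hypothetical helper, not part of the test scripts):

    if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
        echo "vfio-pci present: IOVA mode VA is possible"
    else
        echo "no VFIO: uio_pci_generic binding, IOVA mode PA (requires root)"
    fi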
00:05:45.729 EAL: Setting maximum number of open files to 524288 00:05:45.729 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:45.729 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:45.729 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.729 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:45.729 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.729 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.729 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:45.729 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:45.729 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.729 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:45.729 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.729 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.729 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:45.729 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:45.729 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.729 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:45.729 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.729 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.729 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:45.729 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:45.729 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.729 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:45.729 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.729 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.729 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:45.729 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:45.729 EAL: Hugepages will be freed exactly as allocated. 00:05:45.729 EAL: No shared files mode enabled, IPC is disabled 00:05:45.729 EAL: No shared files mode enabled, IPC is disabled 00:05:45.729 EAL: TSC frequency is ~2600000 KHz 00:05:45.729 EAL: Main lcore 0 is ready (tid=7fe3b58afa40;cpuset=[0]) 00:05:45.729 EAL: Trying to obtain current memory policy. 00:05:45.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.729 EAL: Restoring previous memory policy: 0 00:05:45.729 EAL: request: mp_malloc_sync 00:05:45.729 EAL: No shared files mode enabled, IPC is disabled 00:05:45.729 EAL: Heap on socket 0 was expanded by 2MB 00:05:45.729 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:45.729 EAL: No shared files mode enabled, IPC is disabled 00:05:45.729 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:45.729 EAL: Mem event callback 'spdk:(nil)' registered 00:05:45.729 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:45.729 00:05:45.729 00:05:45.729 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.729 http://cunit.sourceforge.net/ 00:05:45.729 00:05:45.729 00:05:45.729 Suite: components_suite 00:05:45.990 Test: vtophys_malloc_test ...passed 00:05:45.990 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
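Each round of vtophys_spdk_malloc_test that follows obtains the current NUMA memory policy, sets MPOL_PREFERRED for socket 0, allocates, and restores the policy; every heap expansion and shrink fires the registered 'spdk:(nil)' mem event callback, which is why each step logs an mp_malloc_sync request (effectively a no-op here, since multi-process IPC is disabled). The allocation sizes grow as 2^n + 2 MB, which reproduces the sequence about to appear in the trace:

    for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB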
00:05:45.990 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.990 EAL: Restoring previous memory policy: 4 00:05:45.990 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.990 EAL: request: mp_malloc_sync 00:05:45.990 EAL: No shared files mode enabled, IPC is disabled 00:05:45.990 EAL: Heap on socket 0 was expanded by 4MB 00:05:45.990 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.990 EAL: request: mp_malloc_sync 00:05:45.990 EAL: No shared files mode enabled, IPC is disabled 00:05:45.990 EAL: Heap on socket 0 was shrunk by 4MB 00:05:45.990 EAL: Trying to obtain current memory policy. 00:05:45.990 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.990 EAL: Restoring previous memory policy: 4 00:05:45.990 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.990 EAL: request: mp_malloc_sync 00:05:45.990 EAL: No shared files mode enabled, IPC is disabled 00:05:45.990 EAL: Heap on socket 0 was expanded by 6MB 00:05:45.990 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.990 EAL: request: mp_malloc_sync 00:05:45.990 EAL: No shared files mode enabled, IPC is disabled 00:05:45.990 EAL: Heap on socket 0 was shrunk by 6MB 00:05:45.990 EAL: Trying to obtain current memory policy. 00:05:45.990 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.990 EAL: Restoring previous memory policy: 4 00:05:45.990 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.990 EAL: request: mp_malloc_sync 00:05:45.990 EAL: No shared files mode enabled, IPC is disabled 00:05:45.990 EAL: Heap on socket 0 was expanded by 10MB 00:05:45.990 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.990 EAL: request: mp_malloc_sync 00:05:45.990 EAL: No shared files mode enabled, IPC is disabled 00:05:45.990 EAL: Heap on socket 0 was shrunk by 10MB 00:05:45.990 EAL: Trying to obtain current memory policy. 00:05:45.990 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.990 EAL: Restoring previous memory policy: 4 00:05:45.990 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.990 EAL: request: mp_malloc_sync 00:05:45.990 EAL: No shared files mode enabled, IPC is disabled 00:05:45.990 EAL: Heap on socket 0 was expanded by 18MB 00:05:45.990 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.990 EAL: request: mp_malloc_sync 00:05:45.990 EAL: No shared files mode enabled, IPC is disabled 00:05:45.990 EAL: Heap on socket 0 was shrunk by 18MB 00:05:45.990 EAL: Trying to obtain current memory policy. 00:05:45.990 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.990 EAL: Restoring previous memory policy: 4 00:05:45.990 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.990 EAL: request: mp_malloc_sync 00:05:45.990 EAL: No shared files mode enabled, IPC is disabled 00:05:45.990 EAL: Heap on socket 0 was expanded by 34MB 00:05:45.990 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.990 EAL: request: mp_malloc_sync 00:05:45.990 EAL: No shared files mode enabled, IPC is disabled 00:05:45.990 EAL: Heap on socket 0 was shrunk by 34MB 00:05:45.990 EAL: Trying to obtain current memory policy. 
00:05:45.990 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.990 EAL: Restoring previous memory policy: 4 00:05:45.990 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.990 EAL: request: mp_malloc_sync 00:05:45.990 EAL: No shared files mode enabled, IPC is disabled 00:05:45.990 EAL: Heap on socket 0 was expanded by 66MB 00:05:45.990 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.990 EAL: request: mp_malloc_sync 00:05:45.990 EAL: No shared files mode enabled, IPC is disabled 00:05:45.990 EAL: Heap on socket 0 was shrunk by 66MB 00:05:45.990 EAL: Trying to obtain current memory policy. 00:05:45.990 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.990 EAL: Restoring previous memory policy: 4 00:05:45.990 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.990 EAL: request: mp_malloc_sync 00:05:45.990 EAL: No shared files mode enabled, IPC is disabled 00:05:45.990 EAL: Heap on socket 0 was expanded by 130MB 00:05:45.990 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.990 EAL: request: mp_malloc_sync 00:05:45.990 EAL: No shared files mode enabled, IPC is disabled 00:05:45.990 EAL: Heap on socket 0 was shrunk by 130MB 00:05:45.990 EAL: Trying to obtain current memory policy. 00:05:45.990 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.251 EAL: Restoring previous memory policy: 4 00:05:46.251 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.251 EAL: request: mp_malloc_sync 00:05:46.251 EAL: No shared files mode enabled, IPC is disabled 00:05:46.251 EAL: Heap on socket 0 was expanded by 258MB 00:05:46.251 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.251 EAL: request: mp_malloc_sync 00:05:46.251 EAL: No shared files mode enabled, IPC is disabled 00:05:46.251 EAL: Heap on socket 0 was shrunk by 258MB 00:05:46.251 EAL: Trying to obtain current memory policy. 00:05:46.251 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.251 EAL: Restoring previous memory policy: 4 00:05:46.251 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.251 EAL: request: mp_malloc_sync 00:05:46.251 EAL: No shared files mode enabled, IPC is disabled 00:05:46.251 EAL: Heap on socket 0 was expanded by 514MB 00:05:46.251 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.512 EAL: request: mp_malloc_sync 00:05:46.512 EAL: No shared files mode enabled, IPC is disabled 00:05:46.512 EAL: Heap on socket 0 was shrunk by 514MB 00:05:46.512 EAL: Trying to obtain current memory policy. 
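Every 'expanded by' line is paired with a matching 'shrunk by' line once the test frees the buffer, confirming the mem event callback fires symmetrically in both directions. With the run captured to a file (vtophys.log is a hypothetical name), the pairing is easy to verify:

    grep -oE 'was (expanded|shrunk) by [0-9]+MB' vtophys.log | sort | uniq -c
    # each size should appear once as expanded and once as shrunk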
00:05:46.512 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.512 EAL: Restoring previous memory policy: 4 00:05:46.512 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.512 EAL: request: mp_malloc_sync 00:05:46.512 EAL: No shared files mode enabled, IPC is disabled 00:05:46.512 EAL: Heap on socket 0 was expanded by 1026MB 00:05:46.781 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.781 passed 00:05:46.781 00:05:46.781 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.781 suites 1 1 n/a 0 0 00:05:46.781 tests 2 2 2 0 0 00:05:46.781 asserts 5470 5470 5470 0 n/a 00:05:46.781 00:05:46.781 Elapsed time = 1.040 seconds 00:05:46.781 EAL: request: mp_malloc_sync 00:05:46.781 EAL: No shared files mode enabled, IPC is disabled 00:05:46.781 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:46.781 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.781 EAL: request: mp_malloc_sync 00:05:46.781 EAL: No shared files mode enabled, IPC is disabled 00:05:46.781 EAL: Heap on socket 0 was shrunk by 2MB 00:05:46.781 EAL: No shared files mode enabled, IPC is disabled 00:05:46.781 EAL: No shared files mode enabled, IPC is disabled 00:05:46.781 EAL: No shared files mode enabled, IPC is disabled 00:05:46.781 00:05:46.781 real 0m1.269s 00:05:46.781 user 0m0.499s 00:05:46.781 sys 0m0.632s 00:05:46.781 05:07:39 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.781 ************************************ 00:05:46.781 END TEST env_vtophys 00:05:46.781 ************************************ 00:05:46.781 05:07:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:46.781 05:07:39 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:46.781 05:07:39 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.781 05:07:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.781 05:07:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.781 ************************************ 00:05:46.781 START TEST env_pci 00:05:46.781 ************************************ 00:05:46.781 05:07:39 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:46.781 00:05:46.781 00:05:46.781 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.781 http://cunit.sourceforge.net/ 00:05:46.781 00:05:46.781 00:05:46.781 Suite: pci 00:05:46.781 Test: pci_hook ...[2024-11-10 05:07:39.991678] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69672 has claimed it 00:05:47.042 passed 00:05:47.042 00:05:47.042 EAL: Cannot find device (10000:00:01.0) 00:05:47.042 EAL: Failed to attach device on primary process 00:05:47.042 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.042 suites 1 1 n/a 0 0 00:05:47.042 tests 1 1 1 0 0 00:05:47.042 asserts 25 25 25 0 n/a 00:05:47.042 00:05:47.042 Elapsed time = 0.005 seconds 00:05:47.042 00:05:47.042 real 0m0.049s 00:05:47.042 user 0m0.019s 00:05:47.042 sys 0m0.029s 00:05:47.042 05:07:40 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.042 ************************************ 00:05:47.042 END TEST env_pci 00:05:47.042 ************************************ 00:05:47.042 05:07:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:47.042 05:07:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:47.042 05:07:40 env -- env/env.sh@15 -- # uname 00:05:47.042 05:07:40 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:47.042 05:07:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:47.042 05:07:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:47.042 05:07:40 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:47.042 05:07:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.042 05:07:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.042 ************************************ 00:05:47.042 START TEST env_dpdk_post_init 00:05:47.042 ************************************ 00:05:47.042 05:07:40 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:47.042 EAL: Detected CPU lcores: 10 00:05:47.042 EAL: Detected NUMA nodes: 1 00:05:47.042 EAL: Detected shared linkage of DPDK 00:05:47.042 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:47.042 EAL: Selected IOVA mode 'PA' 00:05:47.042 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:47.042 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:47.042 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:47.042 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:47.042 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:47.304 Starting DPDK initialization... 00:05:47.304 Starting SPDK post initialization... 00:05:47.304 SPDK NVMe probe 00:05:47.304 Attaching to 0000:00:10.0 00:05:47.304 Attaching to 0000:00:11.0 00:05:47.304 Attaching to 0000:00:12.0 00:05:47.304 Attaching to 0000:00:13.0 00:05:47.304 Attached to 0000:00:10.0 00:05:47.304 Attached to 0000:00:11.0 00:05:47.304 Attached to 0000:00:13.0 00:05:47.304 Attached to 0000:00:12.0 00:05:47.304 Cleaning up... 
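Note that the four controllers attach out of submission order above (0000:00:13.0 completes before 0000:00:12.0), a reminder that spdk_nvme probes finish asynchronously. The same controllers can be enumerated outside the test by PCI class code 0x010802 (mass storage, NVM Express); this one-liner is an illustration, not part of env.sh:

    for d in /sys/bus/pci/devices/*; do
        [[ $(cat "$d/class") == 0x010802 ]] && echo "${d##*/} $(cat "$d/vendor") $(cat "$d/device")"
    done
    # expected here: four 0x1b36 0x0010 entries (QEMU's emulated NVMe)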
00:05:47.304 00:05:47.304 real 0m0.225s 00:05:47.304 user 0m0.072s 00:05:47.304 sys 0m0.055s 00:05:47.304 05:07:40 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.304 05:07:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.304 ************************************ 00:05:47.304 END TEST env_dpdk_post_init 00:05:47.304 ************************************ 00:05:47.304 05:07:40 env -- env/env.sh@26 -- # uname 00:05:47.304 05:07:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:47.304 05:07:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:47.304 05:07:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.304 05:07:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.304 05:07:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.304 ************************************ 00:05:47.304 START TEST env_mem_callbacks 00:05:47.304 ************************************ 00:05:47.304 05:07:40 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:47.304 EAL: Detected CPU lcores: 10 00:05:47.304 EAL: Detected NUMA nodes: 1 00:05:47.304 EAL: Detected shared linkage of DPDK 00:05:47.304 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:47.304 EAL: Selected IOVA mode 'PA' 00:05:47.304 00:05:47.304 00:05:47.304 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.304 http://cunit.sourceforge.net/ 00:05:47.304 00:05:47.304 00:05:47.304 Suite: memory 00:05:47.304 Test: test ... 00:05:47.304 register 0x200000200000 2097152 00:05:47.304 malloc 3145728 00:05:47.304 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:47.304 register 0x200000400000 4194304 00:05:47.304 buf 0x200000500000 len 3145728 PASSED 00:05:47.304 malloc 64 00:05:47.304 buf 0x2000004fff40 len 64 PASSED 00:05:47.304 malloc 4194304 00:05:47.304 register 0x200000800000 6291456 00:05:47.304 buf 0x200000a00000 len 4194304 PASSED 00:05:47.304 free 0x200000500000 3145728 00:05:47.304 free 0x2000004fff40 64 00:05:47.304 unregister 0x200000400000 4194304 PASSED 00:05:47.304 free 0x200000a00000 4194304 00:05:47.304 unregister 0x200000800000 6291456 PASSED 00:05:47.304 malloc 8388608 00:05:47.304 register 0x200000400000 10485760 00:05:47.304 buf 0x200000600000 len 8388608 PASSED 00:05:47.304 free 0x200000600000 8388608 00:05:47.304 unregister 0x200000400000 10485760 PASSED 00:05:47.304 passed 00:05:47.304 00:05:47.304 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.304 suites 1 1 n/a 0 0 00:05:47.304 tests 1 1 1 0 0 00:05:47.304 asserts 15 15 15 0 n/a 00:05:47.304 00:05:47.304 Elapsed time = 0.012 seconds 00:05:47.304 00:05:47.304 real 0m0.176s 00:05:47.304 user 0m0.026s 00:05:47.304 sys 0m0.047s 00:05:47.304 ************************************ 00:05:47.304 END TEST env_mem_callbacks 00:05:47.304 ************************************ 00:05:47.304 05:07:40 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.304 05:07:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:47.565 ************************************ 00:05:47.565 END TEST env 00:05:47.565 ************************************ 00:05:47.565 00:05:47.565 real 0m2.358s 00:05:47.565 user 0m0.989s 00:05:47.565 sys 0m0.991s 00:05:47.565 05:07:40 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.565 05:07:40 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:47.565 05:07:40 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:47.565 05:07:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.565 05:07:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.565 05:07:40 -- common/autotest_common.sh@10 -- # set +x 00:05:47.565 ************************************ 00:05:47.565 START TEST rpc 00:05:47.565 ************************************ 00:05:47.565 05:07:40 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:47.565 * Looking for test storage... 00:05:47.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.565 05:07:40 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:47.565 05:07:40 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:47.565 05:07:40 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:47.565 05:07:40 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:47.565 05:07:40 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.565 05:07:40 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.565 05:07:40 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.565 05:07:40 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.565 05:07:40 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.565 05:07:40 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.565 05:07:40 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.565 05:07:40 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.565 05:07:40 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.565 05:07:40 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.565 05:07:40 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.565 05:07:40 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:47.565 05:07:40 rpc -- scripts/common.sh@345 -- # : 1 00:05:47.565 05:07:40 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.565 05:07:40 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.565 05:07:40 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:47.565 05:07:40 rpc -- scripts/common.sh@353 -- # local d=1 00:05:47.565 05:07:40 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.565 05:07:40 rpc -- scripts/common.sh@355 -- # echo 1 00:05:47.565 05:07:40 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.565 05:07:40 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:47.565 05:07:40 rpc -- scripts/common.sh@353 -- # local d=2 00:05:47.565 05:07:40 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.565 05:07:40 rpc -- scripts/common.sh@355 -- # echo 2 00:05:47.565 05:07:40 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.565 05:07:40 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.565 05:07:40 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.565 05:07:40 rpc -- scripts/common.sh@368 -- # return 0 00:05:47.566 05:07:40 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.566 05:07:40 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:47.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.566 --rc genhtml_branch_coverage=1 00:05:47.566 --rc genhtml_function_coverage=1 00:05:47.566 --rc genhtml_legend=1 00:05:47.566 --rc geninfo_all_blocks=1 00:05:47.566 --rc geninfo_unexecuted_blocks=1 00:05:47.566 00:05:47.566 ' 00:05:47.566 05:07:40 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:47.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.566 --rc genhtml_branch_coverage=1 00:05:47.566 --rc genhtml_function_coverage=1 00:05:47.566 --rc genhtml_legend=1 00:05:47.566 --rc geninfo_all_blocks=1 00:05:47.566 --rc geninfo_unexecuted_blocks=1 00:05:47.566 00:05:47.566 ' 00:05:47.566 05:07:40 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:47.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.566 --rc genhtml_branch_coverage=1 00:05:47.566 --rc genhtml_function_coverage=1 00:05:47.566 --rc genhtml_legend=1 00:05:47.566 --rc geninfo_all_blocks=1 00:05:47.566 --rc geninfo_unexecuted_blocks=1 00:05:47.566 00:05:47.566 ' 00:05:47.566 05:07:40 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:47.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.566 --rc genhtml_branch_coverage=1 00:05:47.566 --rc genhtml_function_coverage=1 00:05:47.566 --rc genhtml_legend=1 00:05:47.566 --rc geninfo_all_blocks=1 00:05:47.566 --rc geninfo_unexecuted_blocks=1 00:05:47.566 00:05:47.566 ' 00:05:47.566 05:07:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69799 00:05:47.566 05:07:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.566 05:07:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69799 00:05:47.566 05:07:40 rpc -- common/autotest_common.sh@831 -- # '[' -z 69799 ']' 00:05:47.566 05:07:40 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:47.566 05:07:40 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.566 05:07:40 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.566 05:07:40 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
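spdk_tgt is launched with the bdev tracepoint group enabled (-e bdev), which is why trace_get_info later reports a tpoint_group_mask of 0x8, and waitforlisten blocks until the RPC socket answers before the suite proceeds. A minimal stand-in for that helper, polling for the UNIX-domain socket while making sure the process is still alive:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$spdk_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done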
00:05:47.566 05:07:40 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.566 05:07:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.825 [2024-11-10 05:07:40.871162] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:47.825 [2024-11-10 05:07:40.871475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69799 ] 00:05:47.825 [2024-11-10 05:07:41.012926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.085 [2024-11-10 05:07:41.064847] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:48.085 [2024-11-10 05:07:41.064922] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69799' to capture a snapshot of events at runtime. 00:05:48.085 [2024-11-10 05:07:41.064936] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:48.085 [2024-11-10 05:07:41.064950] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:48.085 [2024-11-10 05:07:41.064965] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69799 for offline analysis/debug. 00:05:48.085 [2024-11-10 05:07:41.065029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.658 05:07:41 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.658 05:07:41 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:48.658 05:07:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:48.658 05:07:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:48.658 05:07:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:48.658 05:07:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:48.658 05:07:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.658 05:07:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.658 05:07:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.658 ************************************ 00:05:48.658 START TEST rpc_integrity 00:05:48.658 ************************************ 00:05:48.658 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:48.658 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:48.658 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.658 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.658 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.658 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:48.658 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:48.658 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.658 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.658 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.658 
05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.658 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.658 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:48.658 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:48.658 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.658 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.658 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.658 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:48.658 { 00:05:48.658 "name": "Malloc0", 00:05:48.658 "aliases": [ 00:05:48.658 "25b544ba-a61c-4942-a414-139322f9f245" 00:05:48.658 ], 00:05:48.658 "product_name": "Malloc disk", 00:05:48.658 "block_size": 512, 00:05:48.658 "num_blocks": 16384, 00:05:48.658 "uuid": "25b544ba-a61c-4942-a414-139322f9f245", 00:05:48.658 "assigned_rate_limits": { 00:05:48.658 "rw_ios_per_sec": 0, 00:05:48.658 "rw_mbytes_per_sec": 0, 00:05:48.658 "r_mbytes_per_sec": 0, 00:05:48.658 "w_mbytes_per_sec": 0 00:05:48.658 }, 00:05:48.658 "claimed": false, 00:05:48.658 "zoned": false, 00:05:48.658 "supported_io_types": { 00:05:48.658 "read": true, 00:05:48.658 "write": true, 00:05:48.658 "unmap": true, 00:05:48.658 "flush": true, 00:05:48.658 "reset": true, 00:05:48.658 "nvme_admin": false, 00:05:48.658 "nvme_io": false, 00:05:48.658 "nvme_io_md": false, 00:05:48.658 "write_zeroes": true, 00:05:48.658 "zcopy": true, 00:05:48.658 "get_zone_info": false, 00:05:48.658 "zone_management": false, 00:05:48.658 "zone_append": false, 00:05:48.658 "compare": false, 00:05:48.658 "compare_and_write": false, 00:05:48.658 "abort": true, 00:05:48.658 "seek_hole": false, 00:05:48.658 "seek_data": false, 00:05:48.658 "copy": true, 00:05:48.658 "nvme_iov_md": false 00:05:48.658 }, 00:05:48.658 "memory_domains": [ 00:05:48.658 { 00:05:48.658 "dma_device_id": "system", 00:05:48.658 "dma_device_type": 1 00:05:48.658 }, 00:05:48.658 { 00:05:48.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.658 "dma_device_type": 2 00:05:48.658 } 00:05:48.658 ], 00:05:48.658 "driver_specific": {} 00:05:48.658 } 00:05:48.658 ]' 00:05:48.658 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:48.658 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.658 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:48.658 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.658 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.658 [2024-11-10 05:07:41.815597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:48.658 [2024-11-10 05:07:41.815656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.658 [2024-11-10 05:07:41.815678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:48.658 [2024-11-10 05:07:41.815688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.658 [2024-11-10 05:07:41.817873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.658 [2024-11-10 05:07:41.817910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:48.658 Passthru0 00:05:48.658 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
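rpc_integrity drives the target through rpc_cmd, which forwards to scripts/rpc.py: it creates an 8 MB malloc bdev with 512-byte blocks (hence num_blocks 16384 in the JSON above), layers a passthru bdev on top of it, and then checks the bdev count. Run directly, the sequence looks roughly like this (rpc and count are illustrative names):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    malloc=$($rpc bdev_malloc_create 8 512)            # prints the new bdev name, e.g. Malloc0
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0
    count=$($rpc bdev_get_bdevs | jq length)
    [[ $count == 2 ]]                                   # Malloc0 + Passthru0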
00:05:48.658 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:48.658 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.658 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.658 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.658 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.658 { 00:05:48.658 "name": "Malloc0", 00:05:48.658 "aliases": [ 00:05:48.658 "25b544ba-a61c-4942-a414-139322f9f245" 00:05:48.658 ], 00:05:48.658 "product_name": "Malloc disk", 00:05:48.658 "block_size": 512, 00:05:48.658 "num_blocks": 16384, 00:05:48.658 "uuid": "25b544ba-a61c-4942-a414-139322f9f245", 00:05:48.658 "assigned_rate_limits": { 00:05:48.658 "rw_ios_per_sec": 0, 00:05:48.658 "rw_mbytes_per_sec": 0, 00:05:48.658 "r_mbytes_per_sec": 0, 00:05:48.658 "w_mbytes_per_sec": 0 00:05:48.658 }, 00:05:48.658 "claimed": true, 00:05:48.658 "claim_type": "exclusive_write", 00:05:48.658 "zoned": false, 00:05:48.658 "supported_io_types": { 00:05:48.658 "read": true, 00:05:48.658 "write": true, 00:05:48.658 "unmap": true, 00:05:48.658 "flush": true, 00:05:48.658 "reset": true, 00:05:48.658 "nvme_admin": false, 00:05:48.658 "nvme_io": false, 00:05:48.658 "nvme_io_md": false, 00:05:48.658 "write_zeroes": true, 00:05:48.658 "zcopy": true, 00:05:48.658 "get_zone_info": false, 00:05:48.658 "zone_management": false, 00:05:48.658 "zone_append": false, 00:05:48.658 "compare": false, 00:05:48.658 "compare_and_write": false, 00:05:48.658 "abort": true, 00:05:48.658 "seek_hole": false, 00:05:48.658 "seek_data": false, 00:05:48.658 "copy": true, 00:05:48.658 "nvme_iov_md": false 00:05:48.658 }, 00:05:48.658 "memory_domains": [ 00:05:48.658 { 00:05:48.658 "dma_device_id": "system", 00:05:48.658 "dma_device_type": 1 00:05:48.658 }, 00:05:48.658 { 00:05:48.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.658 "dma_device_type": 2 00:05:48.658 } 00:05:48.658 ], 00:05:48.658 "driver_specific": {} 00:05:48.658 }, 00:05:48.658 { 00:05:48.658 "name": "Passthru0", 00:05:48.658 "aliases": [ 00:05:48.658 "190a59d1-23fc-5343-85ed-de128e977d17" 00:05:48.658 ], 00:05:48.658 "product_name": "passthru", 00:05:48.658 "block_size": 512, 00:05:48.658 "num_blocks": 16384, 00:05:48.658 "uuid": "190a59d1-23fc-5343-85ed-de128e977d17", 00:05:48.658 "assigned_rate_limits": { 00:05:48.658 "rw_ios_per_sec": 0, 00:05:48.658 "rw_mbytes_per_sec": 0, 00:05:48.658 "r_mbytes_per_sec": 0, 00:05:48.659 "w_mbytes_per_sec": 0 00:05:48.659 }, 00:05:48.659 "claimed": false, 00:05:48.659 "zoned": false, 00:05:48.659 "supported_io_types": { 00:05:48.659 "read": true, 00:05:48.659 "write": true, 00:05:48.659 "unmap": true, 00:05:48.659 "flush": true, 00:05:48.659 "reset": true, 00:05:48.659 "nvme_admin": false, 00:05:48.659 "nvme_io": false, 00:05:48.659 "nvme_io_md": false, 00:05:48.659 "write_zeroes": true, 00:05:48.659 "zcopy": true, 00:05:48.659 "get_zone_info": false, 00:05:48.659 "zone_management": false, 00:05:48.659 "zone_append": false, 00:05:48.659 "compare": false, 00:05:48.659 "compare_and_write": false, 00:05:48.659 "abort": true, 00:05:48.659 "seek_hole": false, 00:05:48.659 "seek_data": false, 00:05:48.659 "copy": true, 00:05:48.659 "nvme_iov_md": false 00:05:48.659 }, 00:05:48.659 "memory_domains": [ 00:05:48.659 { 00:05:48.659 "dma_device_id": "system", 00:05:48.659 "dma_device_type": 1 00:05:48.659 }, 00:05:48.659 { 00:05:48.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.659 "dma_device_type": 
2 00:05:48.659 } 00:05:48.659 ], 00:05:48.659 "driver_specific": { 00:05:48.659 "passthru": { 00:05:48.659 "name": "Passthru0", 00:05:48.659 "base_bdev_name": "Malloc0" 00:05:48.659 } 00:05:48.659 } 00:05:48.659 } 00:05:48.659 ]' 00:05:48.659 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:48.659 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.659 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.659 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.659 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.659 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.659 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:48.659 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.659 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.659 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.659 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.659 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.659 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.920 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.920 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.920 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:48.920 ************************************ 00:05:48.920 END TEST rpc_integrity 00:05:48.920 ************************************ 00:05:48.920 05:07:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.920 00:05:48.920 real 0m0.217s 00:05:48.920 user 0m0.127s 00:05:48.920 sys 0m0.029s 00:05:48.920 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.920 05:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.920 05:07:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:48.920 05:07:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.920 05:07:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.920 05:07:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.920 ************************************ 00:05:48.920 START TEST rpc_plugins 00:05:48.920 ************************************ 00:05:48.920 05:07:41 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:48.920 05:07:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:48.920 05:07:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.920 05:07:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.920 05:07:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.920 05:07:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:48.920 05:07:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:48.920 05:07:41 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.920 05:07:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.920 05:07:41 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.920 05:07:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:48.920 { 00:05:48.920 "name": "Malloc1", 00:05:48.920 
"aliases": [ 00:05:48.920 "d4756f63-3102-4551-a7c3-533a87901996" 00:05:48.920 ], 00:05:48.920 "product_name": "Malloc disk", 00:05:48.920 "block_size": 4096, 00:05:48.920 "num_blocks": 256, 00:05:48.920 "uuid": "d4756f63-3102-4551-a7c3-533a87901996", 00:05:48.920 "assigned_rate_limits": { 00:05:48.920 "rw_ios_per_sec": 0, 00:05:48.920 "rw_mbytes_per_sec": 0, 00:05:48.920 "r_mbytes_per_sec": 0, 00:05:48.920 "w_mbytes_per_sec": 0 00:05:48.920 }, 00:05:48.920 "claimed": false, 00:05:48.920 "zoned": false, 00:05:48.920 "supported_io_types": { 00:05:48.920 "read": true, 00:05:48.920 "write": true, 00:05:48.920 "unmap": true, 00:05:48.920 "flush": true, 00:05:48.920 "reset": true, 00:05:48.920 "nvme_admin": false, 00:05:48.920 "nvme_io": false, 00:05:48.920 "nvme_io_md": false, 00:05:48.920 "write_zeroes": true, 00:05:48.920 "zcopy": true, 00:05:48.920 "get_zone_info": false, 00:05:48.920 "zone_management": false, 00:05:48.920 "zone_append": false, 00:05:48.920 "compare": false, 00:05:48.920 "compare_and_write": false, 00:05:48.920 "abort": true, 00:05:48.920 "seek_hole": false, 00:05:48.920 "seek_data": false, 00:05:48.920 "copy": true, 00:05:48.920 "nvme_iov_md": false 00:05:48.920 }, 00:05:48.920 "memory_domains": [ 00:05:48.920 { 00:05:48.920 "dma_device_id": "system", 00:05:48.920 "dma_device_type": 1 00:05:48.920 }, 00:05:48.920 { 00:05:48.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.920 "dma_device_type": 2 00:05:48.920 } 00:05:48.920 ], 00:05:48.920 "driver_specific": {} 00:05:48.920 } 00:05:48.920 ]' 00:05:48.920 05:07:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:48.920 05:07:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:48.920 05:07:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:48.920 05:07:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.920 05:07:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.920 05:07:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.920 05:07:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:48.921 05:07:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.921 05:07:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.921 05:07:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.921 05:07:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:48.921 05:07:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:48.921 ************************************ 00:05:48.921 END TEST rpc_plugins 00:05:48.921 ************************************ 00:05:48.921 05:07:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:48.921 00:05:48.921 real 0m0.119s 00:05:48.921 user 0m0.063s 00:05:48.921 sys 0m0.017s 00:05:48.921 05:07:42 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.921 05:07:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.921 05:07:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:48.921 05:07:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.921 05:07:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.921 05:07:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.921 ************************************ 00:05:48.921 START TEST rpc_trace_cmd_test 00:05:48.921 ************************************ 00:05:48.921 05:07:42 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:48.921 05:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:48.921 05:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:48.921 05:07:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.921 05:07:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:48.921 05:07:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.921 05:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:48.921 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69799", 00:05:48.921 "tpoint_group_mask": "0x8", 00:05:48.921 "iscsi_conn": { 00:05:48.921 "mask": "0x2", 00:05:48.921 "tpoint_mask": "0x0" 00:05:48.921 }, 00:05:48.921 "scsi": { 00:05:48.921 "mask": "0x4", 00:05:48.921 "tpoint_mask": "0x0" 00:05:48.921 }, 00:05:48.921 "bdev": { 00:05:48.921 "mask": "0x8", 00:05:48.921 "tpoint_mask": "0xffffffffffffffff" 00:05:48.921 }, 00:05:48.921 "nvmf_rdma": { 00:05:48.921 "mask": "0x10", 00:05:48.921 "tpoint_mask": "0x0" 00:05:48.921 }, 00:05:48.921 "nvmf_tcp": { 00:05:48.921 "mask": "0x20", 00:05:48.921 "tpoint_mask": "0x0" 00:05:48.921 }, 00:05:48.921 "ftl": { 00:05:48.921 "mask": "0x40", 00:05:48.921 "tpoint_mask": "0x0" 00:05:48.921 }, 00:05:48.921 "blobfs": { 00:05:48.921 "mask": "0x80", 00:05:48.921 "tpoint_mask": "0x0" 00:05:48.921 }, 00:05:48.921 "dsa": { 00:05:48.921 "mask": "0x200", 00:05:48.921 "tpoint_mask": "0x0" 00:05:48.921 }, 00:05:48.921 "thread": { 00:05:48.921 "mask": "0x400", 00:05:48.921 "tpoint_mask": "0x0" 00:05:48.921 }, 00:05:48.921 "nvme_pcie": { 00:05:48.921 "mask": "0x800", 00:05:48.921 "tpoint_mask": "0x0" 00:05:48.921 }, 00:05:48.921 "iaa": { 00:05:48.921 "mask": "0x1000", 00:05:48.921 "tpoint_mask": "0x0" 00:05:48.921 }, 00:05:48.921 "nvme_tcp": { 00:05:48.921 "mask": "0x2000", 00:05:48.921 "tpoint_mask": "0x0" 00:05:48.921 }, 00:05:48.921 "bdev_nvme": { 00:05:48.921 "mask": "0x4000", 00:05:48.921 "tpoint_mask": "0x0" 00:05:48.921 }, 00:05:48.921 "sock": { 00:05:48.921 "mask": "0x8000", 00:05:48.921 "tpoint_mask": "0x0" 00:05:48.921 }, 00:05:48.921 "blob": { 00:05:48.921 "mask": "0x10000", 00:05:48.921 "tpoint_mask": "0x0" 00:05:48.921 }, 00:05:48.921 "bdev_raid": { 00:05:48.921 "mask": "0x20000", 00:05:48.921 "tpoint_mask": "0x0" 00:05:48.921 } 00:05:48.921 }' 00:05:48.921 05:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:49.182 05:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:49.182 05:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:49.182 05:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:49.182 05:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:49.182 05:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:49.182 05:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:49.182 05:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:49.182 05:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:49.182 ************************************ 00:05:49.182 END TEST rpc_trace_cmd_test 00:05:49.182 ************************************ 00:05:49.182 05:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:49.182 00:05:49.182 real 0m0.182s 00:05:49.182 user 0m0.152s 00:05:49.182 sys 0m0.021s 00:05:49.182 05:07:42 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.182 05:07:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.182 05:07:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:49.182 05:07:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:49.182 05:07:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:49.182 05:07:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.182 05:07:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.182 05:07:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.182 ************************************ 00:05:49.182 START TEST rpc_daemon_integrity 00:05:49.182 ************************************ 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:49.182 { 00:05:49.182 "name": "Malloc2", 00:05:49.182 "aliases": [ 00:05:49.182 "f85d7978-d5f3-49e6-a67e-f8244d73b151" 00:05:49.182 ], 00:05:49.182 "product_name": "Malloc disk", 00:05:49.182 "block_size": 512, 00:05:49.182 "num_blocks": 16384, 00:05:49.182 "uuid": "f85d7978-d5f3-49e6-a67e-f8244d73b151", 00:05:49.182 "assigned_rate_limits": { 00:05:49.182 "rw_ios_per_sec": 0, 00:05:49.182 "rw_mbytes_per_sec": 0, 00:05:49.182 "r_mbytes_per_sec": 0, 00:05:49.182 "w_mbytes_per_sec": 0 00:05:49.182 }, 00:05:49.182 "claimed": false, 00:05:49.182 "zoned": false, 00:05:49.182 "supported_io_types": { 00:05:49.182 "read": true, 00:05:49.182 "write": true, 00:05:49.182 "unmap": true, 00:05:49.182 "flush": true, 00:05:49.182 "reset": true, 00:05:49.182 "nvme_admin": false, 00:05:49.182 "nvme_io": false, 00:05:49.182 "nvme_io_md": false, 00:05:49.182 "write_zeroes": true, 00:05:49.182 "zcopy": true, 00:05:49.182 "get_zone_info": false, 00:05:49.182 "zone_management": false, 00:05:49.182 "zone_append": false, 00:05:49.182 "compare": false, 00:05:49.182 "compare_and_write": false, 00:05:49.182 "abort": true, 00:05:49.182 
"seek_hole": false, 00:05:49.182 "seek_data": false, 00:05:49.182 "copy": true, 00:05:49.182 "nvme_iov_md": false 00:05:49.182 }, 00:05:49.182 "memory_domains": [ 00:05:49.182 { 00:05:49.182 "dma_device_id": "system", 00:05:49.182 "dma_device_type": 1 00:05:49.182 }, 00:05:49.182 { 00:05:49.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.182 "dma_device_type": 2 00:05:49.182 } 00:05:49.182 ], 00:05:49.182 "driver_specific": {} 00:05:49.182 } 00:05:49.182 ]' 00:05:49.182 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.443 [2024-11-10 05:07:42.443848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:49.443 [2024-11-10 05:07:42.443900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.443 [2024-11-10 05:07:42.443921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:49.443 [2024-11-10 05:07:42.443930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.443 [2024-11-10 05:07:42.446036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.443 [2024-11-10 05:07:42.446067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:49.443 Passthru0 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:49.443 { 00:05:49.443 "name": "Malloc2", 00:05:49.443 "aliases": [ 00:05:49.443 "f85d7978-d5f3-49e6-a67e-f8244d73b151" 00:05:49.443 ], 00:05:49.443 "product_name": "Malloc disk", 00:05:49.443 "block_size": 512, 00:05:49.443 "num_blocks": 16384, 00:05:49.443 "uuid": "f85d7978-d5f3-49e6-a67e-f8244d73b151", 00:05:49.443 "assigned_rate_limits": { 00:05:49.443 "rw_ios_per_sec": 0, 00:05:49.443 "rw_mbytes_per_sec": 0, 00:05:49.443 "r_mbytes_per_sec": 0, 00:05:49.443 "w_mbytes_per_sec": 0 00:05:49.443 }, 00:05:49.443 "claimed": true, 00:05:49.443 "claim_type": "exclusive_write", 00:05:49.443 "zoned": false, 00:05:49.443 "supported_io_types": { 00:05:49.443 "read": true, 00:05:49.443 "write": true, 00:05:49.443 "unmap": true, 00:05:49.443 "flush": true, 00:05:49.443 "reset": true, 00:05:49.443 "nvme_admin": false, 00:05:49.443 "nvme_io": false, 00:05:49.443 "nvme_io_md": false, 00:05:49.443 "write_zeroes": true, 00:05:49.443 "zcopy": true, 00:05:49.443 "get_zone_info": false, 00:05:49.443 "zone_management": false, 00:05:49.443 "zone_append": false, 00:05:49.443 "compare": false, 00:05:49.443 "compare_and_write": false, 00:05:49.443 "abort": true, 00:05:49.443 "seek_hole": false, 00:05:49.443 "seek_data": false, 00:05:49.443 "copy": true, 00:05:49.443 "nvme_iov_md": false 
00:05:49.443 }, 00:05:49.443 "memory_domains": [ 00:05:49.443 { 00:05:49.443 "dma_device_id": "system", 00:05:49.443 "dma_device_type": 1 00:05:49.443 }, 00:05:49.443 { 00:05:49.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.443 "dma_device_type": 2 00:05:49.443 } 00:05:49.443 ], 00:05:49.443 "driver_specific": {} 00:05:49.443 }, 00:05:49.443 { 00:05:49.443 "name": "Passthru0", 00:05:49.443 "aliases": [ 00:05:49.443 "ea722701-83f0-533e-aeea-fe5b2eb29508" 00:05:49.443 ], 00:05:49.443 "product_name": "passthru", 00:05:49.443 "block_size": 512, 00:05:49.443 "num_blocks": 16384, 00:05:49.443 "uuid": "ea722701-83f0-533e-aeea-fe5b2eb29508", 00:05:49.443 "assigned_rate_limits": { 00:05:49.443 "rw_ios_per_sec": 0, 00:05:49.443 "rw_mbytes_per_sec": 0, 00:05:49.443 "r_mbytes_per_sec": 0, 00:05:49.443 "w_mbytes_per_sec": 0 00:05:49.443 }, 00:05:49.443 "claimed": false, 00:05:49.443 "zoned": false, 00:05:49.443 "supported_io_types": { 00:05:49.443 "read": true, 00:05:49.443 "write": true, 00:05:49.443 "unmap": true, 00:05:49.443 "flush": true, 00:05:49.443 "reset": true, 00:05:49.443 "nvme_admin": false, 00:05:49.443 "nvme_io": false, 00:05:49.443 "nvme_io_md": false, 00:05:49.443 "write_zeroes": true, 00:05:49.443 "zcopy": true, 00:05:49.443 "get_zone_info": false, 00:05:49.443 "zone_management": false, 00:05:49.443 "zone_append": false, 00:05:49.443 "compare": false, 00:05:49.443 "compare_and_write": false, 00:05:49.443 "abort": true, 00:05:49.443 "seek_hole": false, 00:05:49.443 "seek_data": false, 00:05:49.443 "copy": true, 00:05:49.443 "nvme_iov_md": false 00:05:49.443 }, 00:05:49.443 "memory_domains": [ 00:05:49.443 { 00:05:49.443 "dma_device_id": "system", 00:05:49.443 "dma_device_type": 1 00:05:49.443 }, 00:05:49.443 { 00:05:49.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.443 "dma_device_type": 2 00:05:49.443 } 00:05:49.443 ], 00:05:49.443 "driver_specific": { 00:05:49.443 "passthru": { 00:05:49.443 "name": "Passthru0", 00:05:49.443 "base_bdev_name": "Malloc2" 00:05:49.443 } 00:05:49.443 } 00:05:49.443 } 00:05:49.443 ]' 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.443 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.444 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.444 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # 
jq length 00:05:49.444 ************************************ 00:05:49.444 END TEST rpc_daemon_integrity 00:05:49.444 ************************************ 00:05:49.444 05:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.444 00:05:49.444 real 0m0.215s 00:05:49.444 user 0m0.118s 00:05:49.444 sys 0m0.041s 00:05:49.444 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.444 05:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.444 05:07:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:49.444 05:07:42 rpc -- rpc/rpc.sh@84 -- # killprocess 69799 00:05:49.444 05:07:42 rpc -- common/autotest_common.sh@950 -- # '[' -z 69799 ']' 00:05:49.444 05:07:42 rpc -- common/autotest_common.sh@954 -- # kill -0 69799 00:05:49.444 05:07:42 rpc -- common/autotest_common.sh@955 -- # uname 00:05:49.444 05:07:42 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.444 05:07:42 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69799 00:05:49.444 killing process with pid 69799 00:05:49.444 05:07:42 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.444 05:07:42 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.444 05:07:42 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69799' 00:05:49.444 05:07:42 rpc -- common/autotest_common.sh@969 -- # kill 69799 00:05:49.444 05:07:42 rpc -- common/autotest_common.sh@974 -- # wait 69799 00:05:49.704 ************************************ 00:05:49.704 END TEST rpc 00:05:49.704 ************************************ 00:05:49.704 00:05:49.704 real 0m2.230s 00:05:49.704 user 0m2.596s 00:05:49.704 sys 0m0.644s 00:05:49.704 05:07:42 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.704 05:07:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.704 05:07:42 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:49.704 05:07:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.704 05:07:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.704 05:07:42 -- common/autotest_common.sh@10 -- # set +x 00:05:49.704 ************************************ 00:05:49.704 START TEST skip_rpc 00:05:49.704 ************************************ 00:05:49.704 05:07:42 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:49.965 * Looking for test storage... 
00:05:49.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.965 05:07:42 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:49.965 05:07:42 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:49.965 05:07:42 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.965 05:07:43 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.965 05:07:43 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:49.965 05:07:43 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.965 05:07:43 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.965 --rc genhtml_branch_coverage=1 00:05:49.965 --rc genhtml_function_coverage=1 00:05:49.965 --rc genhtml_legend=1 00:05:49.965 --rc geninfo_all_blocks=1 00:05:49.965 --rc geninfo_unexecuted_blocks=1 00:05:49.965 00:05:49.965 ' 00:05:49.965 05:07:43 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.965 --rc genhtml_branch_coverage=1 00:05:49.965 --rc genhtml_function_coverage=1 00:05:49.965 --rc genhtml_legend=1 00:05:49.965 --rc geninfo_all_blocks=1 00:05:49.965 --rc geninfo_unexecuted_blocks=1 00:05:49.965 00:05:49.965 ' 00:05:49.965 05:07:43 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:05:49.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.965 --rc genhtml_branch_coverage=1 00:05:49.965 --rc genhtml_function_coverage=1 00:05:49.965 --rc genhtml_legend=1 00:05:49.965 --rc geninfo_all_blocks=1 00:05:49.965 --rc geninfo_unexecuted_blocks=1 00:05:49.965 00:05:49.965 ' 00:05:49.965 05:07:43 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.965 --rc genhtml_branch_coverage=1 00:05:49.965 --rc genhtml_function_coverage=1 00:05:49.965 --rc genhtml_legend=1 00:05:49.965 --rc geninfo_all_blocks=1 00:05:49.965 --rc geninfo_unexecuted_blocks=1 00:05:49.965 00:05:49.965 ' 00:05:49.965 05:07:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.965 05:07:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:49.965 05:07:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:49.965 05:07:43 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.965 05:07:43 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.965 05:07:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.965 ************************************ 00:05:49.965 START TEST skip_rpc 00:05:49.965 ************************************ 00:05:49.965 05:07:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:49.965 05:07:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69995 00:05:49.965 05:07:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.965 05:07:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:49.965 05:07:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:49.965 [2024-11-10 05:07:43.125194] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:49.965 [2024-11-10 05:07:43.125442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69995 ] 00:05:50.227 [2024-11-10 05:07:43.271987] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.227 [2024-11-10 05:07:43.304066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69995 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69995 ']' 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69995 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69995 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69995' 00:05:55.522 killing process with pid 69995 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69995 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69995 00:05:55.522 00:05:55.522 real 0m5.270s 00:05:55.522 ************************************ 00:05:55.522 END TEST skip_rpc 00:05:55.522 ************************************ 00:05:55.522 user 0m4.933s 00:05:55.522 sys 0m0.235s 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.522 05:07:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:55.522 05:07:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:55.522 05:07:48 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.522 05:07:48 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.522 05:07:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.522 ************************************ 00:05:55.522 START TEST skip_rpc_with_json 00:05:55.522 ************************************ 00:05:55.522 05:07:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:55.522 05:07:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:55.522 05:07:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70077 00:05:55.522 05:07:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.522 05:07:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70077 00:05:55.522 05:07:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 70077 ']' 00:05:55.522 05:07:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.522 05:07:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.522 05:07:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.522 05:07:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.522 05:07:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.523 05:07:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.523 [2024-11-10 05:07:48.431064] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:55.523 [2024-11-10 05:07:48.431175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70077 ] 00:05:55.523 [2024-11-10 05:07:48.576957] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.523 [2024-11-10 05:07:48.605535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.092 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.092 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:56.092 05:07:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:56.092 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.092 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.092 [2024-11-10 05:07:49.265304] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:56.092 request: 00:05:56.092 { 00:05:56.092 "trtype": "tcp", 00:05:56.092 "method": "nvmf_get_transports", 00:05:56.092 "req_id": 1 00:05:56.092 } 00:05:56.092 Got JSON-RPC error response 00:05:56.092 response: 00:05:56.092 { 00:05:56.092 "code": -19, 00:05:56.092 "message": "No such device" 00:05:56.092 } 00:05:56.092 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:56.092 05:07:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:56.092 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.092 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.092 [2024-11-10 05:07:49.273396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.092 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.092 05:07:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:56.092 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.092 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.353 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.353 05:07:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:56.353 { 00:05:56.353 "subsystems": [ 00:05:56.353 { 00:05:56.353 "subsystem": "fsdev", 00:05:56.353 "config": [ 00:05:56.353 { 00:05:56.353 "method": "fsdev_set_opts", 00:05:56.353 "params": { 00:05:56.353 "fsdev_io_pool_size": 65535, 00:05:56.353 "fsdev_io_cache_size": 256 00:05:56.353 } 00:05:56.353 } 00:05:56.353 ] 00:05:56.353 }, 00:05:56.354 { 00:05:56.354 "subsystem": "keyring", 00:05:56.354 "config": [] 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "subsystem": "iobuf", 00:05:56.354 "config": [ 00:05:56.354 { 00:05:56.354 "method": "iobuf_set_options", 00:05:56.354 "params": { 00:05:56.354 "small_pool_count": 8192, 00:05:56.354 "large_pool_count": 1024, 00:05:56.354 "small_bufsize": 8192, 00:05:56.354 "large_bufsize": 135168 00:05:56.354 } 00:05:56.354 } 00:05:56.354 ] 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "subsystem": "sock", 00:05:56.354 "config": [ 00:05:56.354 { 00:05:56.354 "method": 
"sock_set_default_impl", 00:05:56.354 "params": { 00:05:56.354 "impl_name": "posix" 00:05:56.354 } 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "method": "sock_impl_set_options", 00:05:56.354 "params": { 00:05:56.354 "impl_name": "ssl", 00:05:56.354 "recv_buf_size": 4096, 00:05:56.354 "send_buf_size": 4096, 00:05:56.354 "enable_recv_pipe": true, 00:05:56.354 "enable_quickack": false, 00:05:56.354 "enable_placement_id": 0, 00:05:56.354 "enable_zerocopy_send_server": true, 00:05:56.354 "enable_zerocopy_send_client": false, 00:05:56.354 "zerocopy_threshold": 0, 00:05:56.354 "tls_version": 0, 00:05:56.354 "enable_ktls": false 00:05:56.354 } 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "method": "sock_impl_set_options", 00:05:56.354 "params": { 00:05:56.354 "impl_name": "posix", 00:05:56.354 "recv_buf_size": 2097152, 00:05:56.354 "send_buf_size": 2097152, 00:05:56.354 "enable_recv_pipe": true, 00:05:56.354 "enable_quickack": false, 00:05:56.354 "enable_placement_id": 0, 00:05:56.354 "enable_zerocopy_send_server": true, 00:05:56.354 "enable_zerocopy_send_client": false, 00:05:56.354 "zerocopy_threshold": 0, 00:05:56.354 "tls_version": 0, 00:05:56.354 "enable_ktls": false 00:05:56.354 } 00:05:56.354 } 00:05:56.354 ] 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "subsystem": "vmd", 00:05:56.354 "config": [] 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "subsystem": "accel", 00:05:56.354 "config": [ 00:05:56.354 { 00:05:56.354 "method": "accel_set_options", 00:05:56.354 "params": { 00:05:56.354 "small_cache_size": 128, 00:05:56.354 "large_cache_size": 16, 00:05:56.354 "task_count": 2048, 00:05:56.354 "sequence_count": 2048, 00:05:56.354 "buf_count": 2048 00:05:56.354 } 00:05:56.354 } 00:05:56.354 ] 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "subsystem": "bdev", 00:05:56.354 "config": [ 00:05:56.354 { 00:05:56.354 "method": "bdev_set_options", 00:05:56.354 "params": { 00:05:56.354 "bdev_io_pool_size": 65535, 00:05:56.354 "bdev_io_cache_size": 256, 00:05:56.354 "bdev_auto_examine": true, 00:05:56.354 "iobuf_small_cache_size": 128, 00:05:56.354 "iobuf_large_cache_size": 16 00:05:56.354 } 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "method": "bdev_raid_set_options", 00:05:56.354 "params": { 00:05:56.354 "process_window_size_kb": 1024, 00:05:56.354 "process_max_bandwidth_mb_sec": 0 00:05:56.354 } 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "method": "bdev_iscsi_set_options", 00:05:56.354 "params": { 00:05:56.354 "timeout_sec": 30 00:05:56.354 } 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "method": "bdev_nvme_set_options", 00:05:56.354 "params": { 00:05:56.354 "action_on_timeout": "none", 00:05:56.354 "timeout_us": 0, 00:05:56.354 "timeout_admin_us": 0, 00:05:56.354 "keep_alive_timeout_ms": 10000, 00:05:56.354 "arbitration_burst": 0, 00:05:56.354 "low_priority_weight": 0, 00:05:56.354 "medium_priority_weight": 0, 00:05:56.354 "high_priority_weight": 0, 00:05:56.354 "nvme_adminq_poll_period_us": 10000, 00:05:56.354 "nvme_ioq_poll_period_us": 0, 00:05:56.354 "io_queue_requests": 0, 00:05:56.354 "delay_cmd_submit": true, 00:05:56.354 "transport_retry_count": 4, 00:05:56.354 "bdev_retry_count": 3, 00:05:56.354 "transport_ack_timeout": 0, 00:05:56.354 "ctrlr_loss_timeout_sec": 0, 00:05:56.354 "reconnect_delay_sec": 0, 00:05:56.354 "fast_io_fail_timeout_sec": 0, 00:05:56.354 "disable_auto_failback": false, 00:05:56.354 "generate_uuids": false, 00:05:56.354 "transport_tos": 0, 00:05:56.354 "nvme_error_stat": false, 00:05:56.354 "rdma_srq_size": 0, 00:05:56.354 "io_path_stat": false, 00:05:56.354 
"allow_accel_sequence": false, 00:05:56.354 "rdma_max_cq_size": 0, 00:05:56.354 "rdma_cm_event_timeout_ms": 0, 00:05:56.354 "dhchap_digests": [ 00:05:56.354 "sha256", 00:05:56.354 "sha384", 00:05:56.354 "sha512" 00:05:56.354 ], 00:05:56.354 "dhchap_dhgroups": [ 00:05:56.354 "null", 00:05:56.354 "ffdhe2048", 00:05:56.354 "ffdhe3072", 00:05:56.354 "ffdhe4096", 00:05:56.354 "ffdhe6144", 00:05:56.354 "ffdhe8192" 00:05:56.354 ] 00:05:56.354 } 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "method": "bdev_nvme_set_hotplug", 00:05:56.354 "params": { 00:05:56.354 "period_us": 100000, 00:05:56.354 "enable": false 00:05:56.354 } 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "method": "bdev_wait_for_examine" 00:05:56.354 } 00:05:56.354 ] 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "subsystem": "scsi", 00:05:56.354 "config": null 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "subsystem": "scheduler", 00:05:56.354 "config": [ 00:05:56.354 { 00:05:56.354 "method": "framework_set_scheduler", 00:05:56.354 "params": { 00:05:56.354 "name": "static" 00:05:56.354 } 00:05:56.354 } 00:05:56.354 ] 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "subsystem": "vhost_scsi", 00:05:56.354 "config": [] 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "subsystem": "vhost_blk", 00:05:56.354 "config": [] 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "subsystem": "ublk", 00:05:56.354 "config": [] 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "subsystem": "nbd", 00:05:56.354 "config": [] 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "subsystem": "nvmf", 00:05:56.354 "config": [ 00:05:56.354 { 00:05:56.354 "method": "nvmf_set_config", 00:05:56.354 "params": { 00:05:56.354 "discovery_filter": "match_any", 00:05:56.354 "admin_cmd_passthru": { 00:05:56.354 "identify_ctrlr": false 00:05:56.354 }, 00:05:56.354 "dhchap_digests": [ 00:05:56.354 "sha256", 00:05:56.354 "sha384", 00:05:56.354 "sha512" 00:05:56.354 ], 00:05:56.354 "dhchap_dhgroups": [ 00:05:56.354 "null", 00:05:56.354 "ffdhe2048", 00:05:56.354 "ffdhe3072", 00:05:56.354 "ffdhe4096", 00:05:56.354 "ffdhe6144", 00:05:56.354 "ffdhe8192" 00:05:56.354 ] 00:05:56.354 } 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "method": "nvmf_set_max_subsystems", 00:05:56.354 "params": { 00:05:56.354 "max_subsystems": 1024 00:05:56.354 } 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "method": "nvmf_set_crdt", 00:05:56.354 "params": { 00:05:56.354 "crdt1": 0, 00:05:56.354 "crdt2": 0, 00:05:56.354 "crdt3": 0 00:05:56.354 } 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "method": "nvmf_create_transport", 00:05:56.354 "params": { 00:05:56.354 "trtype": "TCP", 00:05:56.354 "max_queue_depth": 128, 00:05:56.354 "max_io_qpairs_per_ctrlr": 127, 00:05:56.354 "in_capsule_data_size": 4096, 00:05:56.354 "max_io_size": 131072, 00:05:56.354 "io_unit_size": 131072, 00:05:56.354 "max_aq_depth": 128, 00:05:56.354 "num_shared_buffers": 511, 00:05:56.354 "buf_cache_size": 4294967295, 00:05:56.354 "dif_insert_or_strip": false, 00:05:56.354 "zcopy": false, 00:05:56.354 "c2h_success": true, 00:05:56.354 "sock_priority": 0, 00:05:56.354 "abort_timeout_sec": 1, 00:05:56.354 "ack_timeout": 0, 00:05:56.354 "data_wr_pool_size": 0 00:05:56.354 } 00:05:56.354 } 00:05:56.354 ] 00:05:56.354 }, 00:05:56.354 { 00:05:56.354 "subsystem": "iscsi", 00:05:56.354 "config": [ 00:05:56.354 { 00:05:56.354 "method": "iscsi_set_options", 00:05:56.354 "params": { 00:05:56.354 "node_base": "iqn.2016-06.io.spdk", 00:05:56.354 "max_sessions": 128, 00:05:56.354 "max_connections_per_session": 2, 00:05:56.354 "max_queue_depth": 64, 00:05:56.355 "default_time2wait": 2, 
00:05:56.355 "default_time2retain": 20, 00:05:56.355 "first_burst_length": 8192, 00:05:56.355 "immediate_data": true, 00:05:56.355 "allow_duplicated_isid": false, 00:05:56.355 "error_recovery_level": 0, 00:05:56.355 "nop_timeout": 60, 00:05:56.355 "nop_in_interval": 30, 00:05:56.355 "disable_chap": false, 00:05:56.355 "require_chap": false, 00:05:56.355 "mutual_chap": false, 00:05:56.355 "chap_group": 0, 00:05:56.355 "max_large_datain_per_connection": 64, 00:05:56.355 "max_r2t_per_connection": 4, 00:05:56.355 "pdu_pool_size": 36864, 00:05:56.355 "immediate_data_pool_size": 16384, 00:05:56.355 "data_out_pool_size": 2048 00:05:56.355 } 00:05:56.355 } 00:05:56.355 ] 00:05:56.355 } 00:05:56.355 ] 00:05:56.355 } 00:05:56.355 05:07:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:56.355 05:07:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70077 00:05:56.355 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70077 ']' 00:05:56.355 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70077 00:05:56.355 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:56.355 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.355 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70077 00:05:56.355 killing process with pid 70077 00:05:56.355 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.355 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.355 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70077' 00:05:56.355 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70077 00:05:56.355 05:07:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70077 00:05:56.616 05:07:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70106 00:05:56.616 05:07:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:56.616 05:07:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:01.900 05:07:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70106 00:06:01.900 05:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70106 ']' 00:06:01.900 05:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70106 00:06:01.900 05:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:01.900 05:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.900 05:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70106 00:06:01.900 killing process with pid 70106 00:06:01.900 05:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.900 05:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.901 05:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70106' 00:06:01.901 05:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70106 
00:06:01.901 05:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70106 00:06:01.901 05:07:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:01.901 05:07:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:01.901 ************************************ 00:06:01.901 END TEST skip_rpc_with_json 00:06:01.901 ************************************ 00:06:01.901 00:06:01.901 real 0m6.594s 00:06:01.901 user 0m6.300s 00:06:01.901 sys 0m0.513s 00:06:01.901 05:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.901 05:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.901 05:07:54 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:01.901 05:07:54 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.901 05:07:54 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.901 05:07:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.901 ************************************ 00:06:01.901 START TEST skip_rpc_with_delay 00:06:01.901 ************************************ 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.901 [2024-11-10 05:07:55.074018] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
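The spdk_app_start error just above is exactly what skip_rpc_with_delay checks for: --wait-for-rpc pauses spdk_tgt until a framework_start_init RPC arrives, so combining it with --no-rpc-server could never make progress, and the target must refuse to start (the unlink-lock-fd error on the next line is cleanup from the same aborted launch). The NOT helper inverts the exit status, so the test passes only when this launch fails. A minimal sketch of that negative check, assuming only the binary path shown in this log:

    # sketch: the launch must fail; it succeeding would be the bug
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "FAIL: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi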
00:06:01.901 [2024-11-10 05:07:55.074114] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.901 ************************************ 00:06:01.901 END TEST skip_rpc_with_delay 00:06:01.901 ************************************ 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.901 00:06:01.901 real 0m0.103s 00:06:01.901 user 0m0.057s 00:06:01.901 sys 0m0.044s 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.901 05:07:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:02.161 05:07:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:02.161 05:07:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:02.161 05:07:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:02.161 05:07:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.161 05:07:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.161 05:07:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.161 ************************************ 00:06:02.161 START TEST exit_on_failed_rpc_init 00:06:02.161 ************************************ 00:06:02.161 05:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:02.161 05:07:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70217 00:06:02.161 05:07:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70217 00:06:02.161 05:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 70217 ']' 00:06:02.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.161 05:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.161 05:07:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.161 05:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.161 05:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.161 05:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.161 05:07:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:02.162 [2024-11-10 05:07:55.227783] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:02.162 [2024-11-10 05:07:55.228012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70217 ] 00:06:02.162 [2024-11-10 05:07:55.366155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.423 [2024-11-10 05:07:55.397078] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.995 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.996 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:02.996 05:07:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.996 05:07:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.996 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:02.996 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.996 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.996 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.996 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.996 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.996 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.996 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.996 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.996 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:02.996 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.996 [2024-11-10 05:07:56.143047] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:02.996 [2024-11-10 05:07:56.143161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70235 ] 00:06:03.256 [2024-11-10 05:07:56.284187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.256 [2024-11-10 05:07:56.315699] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.256 [2024-11-10 05:07:56.315787] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
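This is the failure exit_on_failed_rpc_init provokes on purpose: the first spdk_tgt (pid 70217) already owns the default RPC socket, so the second instance (-m 0x2, pid 70235) cannot listen on it, and the error and warning that follow finish off the same failed init. A sketch of the conflict and the usual remedy, run from the repo root; -r (--rpc-socket) is the standard SPDK app option for choosing a different socket path:

    # sketch: two targets cannot share the default /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x1 &                        # first instance claims the default socket
    build/bin/spdk_tgt -m 0x2                          # fails as above: socket in use
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock   # a separate socket path avoids the clash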
00:06:03.256 [2024-11-10 05:07:56.315803] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:03.256 [2024-11-10 05:07:56.315813] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70217 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 70217 ']' 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 70217 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70217 00:06:03.256 killing process with pid 70217 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70217' 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 70217 00:06:03.256 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 70217 00:06:03.515 ************************************ 00:06:03.515 END TEST exit_on_failed_rpc_init 00:06:03.515 ************************************ 00:06:03.515 00:06:03.515 real 0m1.490s 00:06:03.515 user 0m1.664s 00:06:03.515 sys 0m0.352s 00:06:03.515 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.515 05:07:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.515 05:07:56 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:03.515 ************************************ 00:06:03.515 END TEST skip_rpc 00:06:03.515 ************************************ 00:06:03.515 00:06:03.515 real 0m13.792s 00:06:03.515 user 0m13.090s 00:06:03.515 sys 0m1.323s 00:06:03.515 05:07:56 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.515 05:07:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.515 05:07:56 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:03.515 05:07:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.515 05:07:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.515 05:07:56 -- common/autotest_common.sh@10 -- # set +x 00:06:03.515 
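With that, the whole skip_rpc suite is done (13.8s wall time above) and autotest.sh moves on to rpc_client. Every suite in this log is driven by the same run_test wrapper from autotest_common.sh: it prints the START banner, runs and times the given command (the real/user/sys triplets sprinkled above), and prints END only when the command succeeds. A simplified sketch of that shape; the real helper also does the argument checks and xtrace management visible in the stack frames here:

    # simplified sketch of the run_test pattern seen throughout this log
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@" || return 1    # the timing surfaces as the real/user/sys lines
        echo "END TEST $name"
    }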
************************************ 00:06:03.515 START TEST rpc_client 00:06:03.515 ************************************ 00:06:03.515 05:07:56 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:03.774 * Looking for test storage... 00:06:03.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:03.774 05:07:56 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:03.774 05:07:56 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:03.774 05:07:56 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:03.774 05:07:56 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.774 05:07:56 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:03.774 05:07:56 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.774 05:07:56 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:03.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.774 --rc genhtml_branch_coverage=1 00:06:03.774 --rc genhtml_function_coverage=1 00:06:03.774 --rc genhtml_legend=1 00:06:03.774 --rc geninfo_all_blocks=1 00:06:03.774 --rc geninfo_unexecuted_blocks=1 00:06:03.774 00:06:03.774 ' 00:06:03.774 05:07:56 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:03.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.774 --rc genhtml_branch_coverage=1 00:06:03.774 --rc genhtml_function_coverage=1 00:06:03.774 --rc genhtml_legend=1 00:06:03.774 --rc geninfo_all_blocks=1 00:06:03.774 --rc geninfo_unexecuted_blocks=1 00:06:03.774 00:06:03.774 ' 00:06:03.774 05:07:56 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:03.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.774 --rc genhtml_branch_coverage=1 00:06:03.774 --rc genhtml_function_coverage=1 00:06:03.774 --rc genhtml_legend=1 00:06:03.774 --rc geninfo_all_blocks=1 00:06:03.774 --rc geninfo_unexecuted_blocks=1 00:06:03.774 00:06:03.774 ' 00:06:03.774 05:07:56 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:03.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.774 --rc genhtml_branch_coverage=1 00:06:03.774 --rc genhtml_function_coverage=1 00:06:03.774 --rc genhtml_legend=1 00:06:03.774 --rc geninfo_all_blocks=1 00:06:03.774 --rc geninfo_unexecuted_blocks=1 00:06:03.774 00:06:03.774 ' 00:06:03.774 05:07:56 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:03.774 OK 00:06:03.774 05:07:56 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:03.774 00:06:03.774 real 0m0.174s 00:06:03.774 user 0m0.096s 00:06:03.774 sys 0m0.083s 00:06:03.774 05:07:56 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.775 05:07:56 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:03.775 ************************************ 00:06:03.775 END TEST rpc_client 00:06:03.775 ************************************ 00:06:03.775 05:07:56 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:03.775 05:07:56 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.775 05:07:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.775 05:07:56 -- common/autotest_common.sh@10 -- # set +x 00:06:03.775 ************************************ 00:06:03.775 START TEST json_config 00:06:03.775 ************************************ 00:06:03.775 05:07:56 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:03.775 05:07:56 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:03.775 05:07:56 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:03.775 05:07:56 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:04.036 05:07:57 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:04.036 05:07:57 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.036 05:07:57 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.036 05:07:57 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.036 05:07:57 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.036 05:07:57 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.036 05:07:57 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.036 05:07:57 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.036 05:07:57 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.036 05:07:57 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.036 05:07:57 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.036 05:07:57 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.036 05:07:57 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:04.036 05:07:57 json_config -- scripts/common.sh@345 -- # : 1 00:06:04.036 05:07:57 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.036 05:07:57 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.036 05:07:57 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:04.036 05:07:57 json_config -- scripts/common.sh@353 -- # local d=1 00:06:04.036 05:07:57 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.036 05:07:57 json_config -- scripts/common.sh@355 -- # echo 1 00:06:04.036 05:07:57 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.036 05:07:57 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:04.036 05:07:57 json_config -- scripts/common.sh@353 -- # local d=2 00:06:04.036 05:07:57 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.036 05:07:57 json_config -- scripts/common.sh@355 -- # echo 2 00:06:04.036 05:07:57 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.036 05:07:57 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.036 05:07:57 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.036 05:07:57 json_config -- scripts/common.sh@368 -- # return 0 00:06:04.036 05:07:57 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.036 05:07:57 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:04.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.036 --rc genhtml_branch_coverage=1 00:06:04.036 --rc genhtml_function_coverage=1 00:06:04.036 --rc genhtml_legend=1 00:06:04.036 --rc geninfo_all_blocks=1 00:06:04.036 --rc geninfo_unexecuted_blocks=1 00:06:04.036 00:06:04.036 ' 00:06:04.036 05:07:57 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:04.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.036 --rc genhtml_branch_coverage=1 00:06:04.036 --rc genhtml_function_coverage=1 00:06:04.036 --rc genhtml_legend=1 00:06:04.036 --rc geninfo_all_blocks=1 00:06:04.036 --rc geninfo_unexecuted_blocks=1 00:06:04.036 00:06:04.036 ' 00:06:04.036 05:07:57 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:04.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.036 --rc genhtml_branch_coverage=1 00:06:04.036 --rc genhtml_function_coverage=1 00:06:04.036 --rc genhtml_legend=1 00:06:04.036 --rc geninfo_all_blocks=1 00:06:04.036 --rc geninfo_unexecuted_blocks=1 00:06:04.036 00:06:04.036 ' 00:06:04.036 05:07:57 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:04.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.036 --rc genhtml_branch_coverage=1 00:06:04.036 --rc genhtml_function_coverage=1 00:06:04.036 --rc genhtml_legend=1 00:06:04.036 --rc geninfo_all_blocks=1 00:06:04.036 --rc geninfo_unexecuted_blocks=1 00:06:04.036 00:06:04.036 ' 00:06:04.036 05:07:57 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:04.036 05:07:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:04.036 05:07:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.036 05:07:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.037 05:07:57 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d8a8bdb7-2cda-4d81-be2d-e7d8b190f340 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=d8a8bdb7-2cda-4d81-be2d-e7d8b190f340 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:04.037 05:07:57 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:04.037 05:07:57 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.037 05:07:57 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.037 05:07:57 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.037 05:07:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.037 05:07:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.037 05:07:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.037 05:07:57 json_config -- paths/export.sh@5 -- # export PATH 00:06:04.037 05:07:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@51 -- # : 0 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:04.037 05:07:57 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:04.037 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:04.037 05:07:57 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:04.037 05:07:57 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:04.037 05:07:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:04.037 05:07:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:04.037 05:07:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:04.037 05:07:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:04.037 05:07:57 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:04.037 WARNING: No tests are enabled so not running JSON configuration tests 00:06:04.037 05:07:57 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:04.037 ************************************ 00:06:04.037 END TEST json_config 00:06:04.037 ************************************ 00:06:04.037 00:06:04.037 real 0m0.139s 00:06:04.037 user 0m0.089s 00:06:04.037 sys 0m0.052s 00:06:04.037 05:07:57 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.037 05:07:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.037 05:07:57 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:04.037 05:07:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.037 05:07:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.037 05:07:57 -- common/autotest_common.sh@10 -- # set +x 00:06:04.037 ************************************ 00:06:04.037 START TEST json_config_extra_key 00:06:04.037 ************************************ 00:06:04.037 05:07:57 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:04.037 05:07:57 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:04.037 05:07:57 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:04.037 05:07:57 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:04.037 05:07:57 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.037 05:07:57 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.037 05:07:57 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:04.037 05:07:57 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.037 05:07:57 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:04.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.037 --rc genhtml_branch_coverage=1 00:06:04.037 --rc genhtml_function_coverage=1 00:06:04.037 --rc genhtml_legend=1 00:06:04.037 --rc geninfo_all_blocks=1 00:06:04.037 --rc geninfo_unexecuted_blocks=1 00:06:04.037 00:06:04.037 ' 00:06:04.037 05:07:57 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:04.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.037 --rc genhtml_branch_coverage=1 00:06:04.037 --rc genhtml_function_coverage=1 00:06:04.037 --rc genhtml_legend=1 00:06:04.037 --rc geninfo_all_blocks=1 00:06:04.037 --rc geninfo_unexecuted_blocks=1 00:06:04.037 00:06:04.037 ' 00:06:04.037 05:07:57 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:04.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.038 --rc genhtml_branch_coverage=1 00:06:04.038 --rc genhtml_function_coverage=1 00:06:04.038 --rc genhtml_legend=1 00:06:04.038 --rc geninfo_all_blocks=1 00:06:04.038 --rc geninfo_unexecuted_blocks=1 00:06:04.038 00:06:04.038 ' 00:06:04.038 05:07:57 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:04.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.038 --rc genhtml_branch_coverage=1 00:06:04.038 --rc 
genhtml_function_coverage=1 00:06:04.038 --rc genhtml_legend=1 00:06:04.038 --rc geninfo_all_blocks=1 00:06:04.038 --rc geninfo_unexecuted_blocks=1 00:06:04.038 00:06:04.038 ' 00:06:04.038 05:07:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d8a8bdb7-2cda-4d81-be2d-e7d8b190f340 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d8a8bdb7-2cda-4d81-be2d-e7d8b190f340 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:04.038 05:07:57 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:04.038 05:07:57 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.038 05:07:57 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.038 05:07:57 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.038 05:07:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.038 05:07:57 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.038 05:07:57 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.038 05:07:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:04.038 05:07:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:04.038 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:04.038 05:07:57 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:04.038 05:07:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:04.038 05:07:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:04.038 05:07:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:04.038 05:07:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:04.038 05:07:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:04.038 05:07:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:04.038 05:07:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:04.038 05:07:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:04.038 05:07:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:04.038 05:07:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:04.038 05:07:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:04.038 INFO: launching applications... 
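The declares above build per-app lookup tables for the test: the target app gets RPC socket /var/tmp/spdk_tgt.sock, parameters -m 0x1 -s 1024, and config extra_key.json. The launch that json_config_test_start_app performs next is equivalent to this standalone sketch (flags and paths taken from the traced command line; the script form itself is illustrative):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    config=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

    # One core (-m 0x1), 1024 MB of hugepage memory (-s 1024), a private RPC
    # socket (-r), and a JSON config applied at startup (--json).
    "$spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$config" &
    app_pid=$!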
00:06:04.038 05:07:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:04.038 05:07:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:04.038 05:07:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:04.038 05:07:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:04.038 05:07:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:04.038 05:07:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:04.038 05:07:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.038 05:07:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.038 05:07:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70412 00:06:04.038 05:07:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:04.038 Waiting for target to run... 00:06:04.038 05:07:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70412 /var/tmp/spdk_tgt.sock 00:06:04.038 05:07:57 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70412 ']' 00:06:04.038 05:07:57 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:04.038 05:07:57 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.038 05:07:57 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:04.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:04.038 05:07:57 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:04.038 05:07:57 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.038 05:07:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:04.298 [2024-11-10 05:07:57.300204] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:04.298 [2024-11-10 05:07:57.300296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70412 ] 00:06:04.557 [2024-11-10 05:07:57.586450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.557 [2024-11-10 05:07:57.604512] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.124 00:06:05.124 INFO: shutting down applications... 00:06:05.124 05:07:58 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.124 05:07:58 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:05.124 05:07:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:05.124 05:07:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
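The waitforlisten 70412 call above blocks until the freshly spawned target answers RPCs on its UNIX socket. A hypothetical equivalent of that wait (the real helper lives in autotest_common.sh; the retry count and the rpc_get_methods probe shown here are assumptions):

    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1      # target died during startup
            # rpc_get_methods succeeds only once the RPC server is listening
            if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }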
00:06:05.124 05:07:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:05.124 05:07:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:05.124 05:07:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:05.124 05:07:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70412 ]] 00:06:05.124 05:07:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70412 00:06:05.124 05:07:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:05.124 05:07:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:05.124 05:07:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70412 00:06:05.124 05:07:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:05.695 05:07:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:05.695 05:07:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:05.695 05:07:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70412 00:06:05.695 05:07:58 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:05.695 05:07:58 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:05.695 05:07:58 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:05.695 SPDK target shutdown done 00:06:05.695 Success 00:06:05.695 05:07:58 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:05.695 05:07:58 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:05.695 00:06:05.695 real 0m1.538s 00:06:05.695 user 0m1.265s 00:06:05.695 sys 0m0.317s 00:06:05.695 ************************************ 00:06:05.695 END TEST json_config_extra_key 00:06:05.695 ************************************ 00:06:05.695 05:07:58 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.695 05:07:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:05.695 05:07:58 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:05.695 05:07:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.695 05:07:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.695 05:07:58 -- common/autotest_common.sh@10 -- # set +x 00:06:05.695 ************************************ 00:06:05.695 START TEST alias_rpc 00:06:05.695 ************************************ 00:06:05.695 05:07:58 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:05.695 * Looking for test storage... 
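The json_config_extra_key shutdown traced above sends SIGINT and then polls for up to 30 half-second intervals before giving up; a sketch of that loop, with the structure read off the common.sh trace (treat it as approximate):

    # Send SIGINT, then poll: kill -0 fails once the target has exited.
    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'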
00:06:05.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:05.695 05:07:58 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:05.695 05:07:58 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:05.695 05:07:58 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:05.695 05:07:58 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.695 05:07:58 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:05.695 05:07:58 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.695 05:07:58 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:05.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.695 --rc genhtml_branch_coverage=1 00:06:05.695 --rc genhtml_function_coverage=1 00:06:05.695 --rc genhtml_legend=1 00:06:05.695 --rc geninfo_all_blocks=1 00:06:05.695 --rc geninfo_unexecuted_blocks=1 00:06:05.695 00:06:05.695 ' 00:06:05.695 05:07:58 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:05.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.695 --rc genhtml_branch_coverage=1 00:06:05.695 --rc genhtml_function_coverage=1 00:06:05.695 --rc genhtml_legend=1 00:06:05.695 --rc geninfo_all_blocks=1 00:06:05.696 --rc geninfo_unexecuted_blocks=1 00:06:05.696 00:06:05.696 ' 00:06:05.696 05:07:58 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:05.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.696 --rc genhtml_branch_coverage=1 00:06:05.696 --rc genhtml_function_coverage=1 00:06:05.696 --rc genhtml_legend=1 00:06:05.696 --rc geninfo_all_blocks=1 00:06:05.696 --rc geninfo_unexecuted_blocks=1 00:06:05.696 00:06:05.696 ' 00:06:05.696 05:07:58 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:05.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.696 --rc genhtml_branch_coverage=1 00:06:05.696 --rc genhtml_function_coverage=1 00:06:05.696 --rc genhtml_legend=1 00:06:05.696 --rc geninfo_all_blocks=1 00:06:05.696 --rc geninfo_unexecuted_blocks=1 00:06:05.696 00:06:05.696 ' 00:06:05.696 05:07:58 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:05.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.696 05:07:58 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70486 00:06:05.696 05:07:58 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70486 00:06:05.696 05:07:58 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.696 05:07:58 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70486 ']' 00:06:05.696 05:07:58 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.696 05:07:58 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.696 05:07:58 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.696 05:07:58 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.696 05:07:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.696 [2024-11-10 05:07:58.907732] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:05.696 [2024-11-10 05:07:58.907978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70486 ] 00:06:05.957 [2024-11-10 05:07:59.055909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.957 [2024-11-10 05:07:59.089450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.523 05:07:59 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.523 05:07:59 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:06.523 05:07:59 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:06.780 05:07:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70486 00:06:06.780 05:07:59 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70486 ']' 00:06:06.780 05:07:59 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70486 00:06:06.780 05:07:59 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:06.780 05:07:59 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.780 05:07:59 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70486 00:06:06.780 killing process with pid 70486 00:06:06.780 05:07:59 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.780 05:07:59 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.780 05:07:59 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70486' 00:06:06.780 05:07:59 alias_rpc -- common/autotest_common.sh@969 -- # kill 70486 00:06:06.780 05:07:59 alias_rpc -- common/autotest_common.sh@974 -- # wait 70486 00:06:07.038 ************************************ 00:06:07.038 END TEST alias_rpc 00:06:07.038 ************************************ 00:06:07.038 00:06:07.038 real 0m1.558s 00:06:07.038 user 0m1.709s 00:06:07.038 sys 0m0.353s 00:06:07.038 05:08:00 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.038 05:08:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.299 05:08:00 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:07.299 05:08:00 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:07.299 05:08:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.299 05:08:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.299 05:08:00 -- common/autotest_common.sh@10 -- # set +x 00:06:07.299 ************************************ 00:06:07.299 START TEST spdkcli_tcp 00:06:07.299 ************************************ 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:07.299 * Looking for test storage... 
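The alias_rpc test above drives the target with a single RPC, load_config -i, which reads a JSON configuration from stdin and replays it against the running app; -i appears to be the include-aliases flag, so deprecated method names in the file resolve too. An illustrative invocation (the empty config is just a placeholder):

    # Replay a (here trivial) JSON config against the running target over
    # /var/tmp/spdk.sock; -i lets deprecated RPC aliases in the file resolve.
    echo '{"subsystems": []}' | \
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i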
00:06:07.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.299 05:08:00 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:07.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.299 --rc genhtml_branch_coverage=1 00:06:07.299 --rc genhtml_function_coverage=1 00:06:07.299 --rc genhtml_legend=1 00:06:07.299 --rc geninfo_all_blocks=1 00:06:07.299 --rc geninfo_unexecuted_blocks=1 00:06:07.299 00:06:07.299 ' 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:07.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.299 --rc genhtml_branch_coverage=1 00:06:07.299 --rc genhtml_function_coverage=1 00:06:07.299 --rc genhtml_legend=1 00:06:07.299 --rc geninfo_all_blocks=1 00:06:07.299 --rc geninfo_unexecuted_blocks=1 00:06:07.299 
00:06:07.299 ' 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:07.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.299 --rc genhtml_branch_coverage=1 00:06:07.299 --rc genhtml_function_coverage=1 00:06:07.299 --rc genhtml_legend=1 00:06:07.299 --rc geninfo_all_blocks=1 00:06:07.299 --rc geninfo_unexecuted_blocks=1 00:06:07.299 00:06:07.299 ' 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:07.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.299 --rc genhtml_branch_coverage=1 00:06:07.299 --rc genhtml_function_coverage=1 00:06:07.299 --rc genhtml_legend=1 00:06:07.299 --rc geninfo_all_blocks=1 00:06:07.299 --rc geninfo_unexecuted_blocks=1 00:06:07.299 00:06:07.299 ' 00:06:07.299 05:08:00 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:07.299 05:08:00 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:07.299 05:08:00 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:07.299 05:08:00 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:07.299 05:08:00 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:07.299 05:08:00 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:07.299 05:08:00 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.299 05:08:00 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70565 00:06:07.299 05:08:00 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70565 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70565 ']' 00:06:07.299 05:08:00 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.299 05:08:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.299 [2024-11-10 05:08:00.511659] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
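The test configures IP_ADDRESS=127.0.0.1 and PORT=9998 above because it is about to bridge TCP to the target's UNIX RPC socket so rpc.py can talk over the network path. The pair of commands it uses boils down to this (taken from the traced invocations):

    # Expose the UNIX-domain RPC socket on TCP port 9998...
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # ...then issue an RPC over TCP: 100 retries, 2 s timeout, per the trace.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods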
00:06:07.299 [2024-11-10 05:08:00.511794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70565 ] 00:06:07.560 [2024-11-10 05:08:00.660454] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.560 [2024-11-10 05:08:00.713136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.560 [2024-11-10 05:08:00.713190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.131 05:08:01 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.131 05:08:01 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:08.131 05:08:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70582 00:06:08.131 05:08:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:08.131 05:08:01 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:08.392 [ 00:06:08.392 "bdev_malloc_delete", 00:06:08.392 "bdev_malloc_create", 00:06:08.392 "bdev_null_resize", 00:06:08.392 "bdev_null_delete", 00:06:08.392 "bdev_null_create", 00:06:08.392 "bdev_nvme_cuse_unregister", 00:06:08.392 "bdev_nvme_cuse_register", 00:06:08.392 "bdev_opal_new_user", 00:06:08.392 "bdev_opal_set_lock_state", 00:06:08.392 "bdev_opal_delete", 00:06:08.392 "bdev_opal_get_info", 00:06:08.392 "bdev_opal_create", 00:06:08.392 "bdev_nvme_opal_revert", 00:06:08.392 "bdev_nvme_opal_init", 00:06:08.392 "bdev_nvme_send_cmd", 00:06:08.392 "bdev_nvme_set_keys", 00:06:08.393 "bdev_nvme_get_path_iostat", 00:06:08.393 "bdev_nvme_get_mdns_discovery_info", 00:06:08.393 "bdev_nvme_stop_mdns_discovery", 00:06:08.393 "bdev_nvme_start_mdns_discovery", 00:06:08.393 "bdev_nvme_set_multipath_policy", 00:06:08.393 "bdev_nvme_set_preferred_path", 00:06:08.393 "bdev_nvme_get_io_paths", 00:06:08.393 "bdev_nvme_remove_error_injection", 00:06:08.393 "bdev_nvme_add_error_injection", 00:06:08.393 "bdev_nvme_get_discovery_info", 00:06:08.393 "bdev_nvme_stop_discovery", 00:06:08.393 "bdev_nvme_start_discovery", 00:06:08.393 "bdev_nvme_get_controller_health_info", 00:06:08.393 "bdev_nvme_disable_controller", 00:06:08.393 "bdev_nvme_enable_controller", 00:06:08.393 "bdev_nvme_reset_controller", 00:06:08.393 "bdev_nvme_get_transport_statistics", 00:06:08.393 "bdev_nvme_apply_firmware", 00:06:08.393 "bdev_nvme_detach_controller", 00:06:08.393 "bdev_nvme_get_controllers", 00:06:08.393 "bdev_nvme_attach_controller", 00:06:08.393 "bdev_nvme_set_hotplug", 00:06:08.393 "bdev_nvme_set_options", 00:06:08.393 "bdev_passthru_delete", 00:06:08.393 "bdev_passthru_create", 00:06:08.393 "bdev_lvol_set_parent_bdev", 00:06:08.393 "bdev_lvol_set_parent", 00:06:08.393 "bdev_lvol_check_shallow_copy", 00:06:08.393 "bdev_lvol_start_shallow_copy", 00:06:08.393 "bdev_lvol_grow_lvstore", 00:06:08.393 "bdev_lvol_get_lvols", 00:06:08.393 "bdev_lvol_get_lvstores", 00:06:08.393 "bdev_lvol_delete", 00:06:08.393 "bdev_lvol_set_read_only", 00:06:08.393 "bdev_lvol_resize", 00:06:08.393 "bdev_lvol_decouple_parent", 00:06:08.393 "bdev_lvol_inflate", 00:06:08.393 "bdev_lvol_rename", 00:06:08.393 "bdev_lvol_clone_bdev", 00:06:08.393 "bdev_lvol_clone", 00:06:08.393 "bdev_lvol_snapshot", 00:06:08.393 "bdev_lvol_create", 00:06:08.393 "bdev_lvol_delete_lvstore", 00:06:08.393 "bdev_lvol_rename_lvstore", 00:06:08.393 
"bdev_lvol_create_lvstore", 00:06:08.393 "bdev_raid_set_options", 00:06:08.393 "bdev_raid_remove_base_bdev", 00:06:08.393 "bdev_raid_add_base_bdev", 00:06:08.393 "bdev_raid_delete", 00:06:08.393 "bdev_raid_create", 00:06:08.393 "bdev_raid_get_bdevs", 00:06:08.393 "bdev_error_inject_error", 00:06:08.393 "bdev_error_delete", 00:06:08.393 "bdev_error_create", 00:06:08.393 "bdev_split_delete", 00:06:08.393 "bdev_split_create", 00:06:08.393 "bdev_delay_delete", 00:06:08.393 "bdev_delay_create", 00:06:08.393 "bdev_delay_update_latency", 00:06:08.393 "bdev_zone_block_delete", 00:06:08.393 "bdev_zone_block_create", 00:06:08.393 "blobfs_create", 00:06:08.393 "blobfs_detect", 00:06:08.393 "blobfs_set_cache_size", 00:06:08.393 "bdev_xnvme_delete", 00:06:08.393 "bdev_xnvme_create", 00:06:08.393 "bdev_aio_delete", 00:06:08.393 "bdev_aio_rescan", 00:06:08.393 "bdev_aio_create", 00:06:08.393 "bdev_ftl_set_property", 00:06:08.393 "bdev_ftl_get_properties", 00:06:08.393 "bdev_ftl_get_stats", 00:06:08.393 "bdev_ftl_unmap", 00:06:08.393 "bdev_ftl_unload", 00:06:08.393 "bdev_ftl_delete", 00:06:08.393 "bdev_ftl_load", 00:06:08.393 "bdev_ftl_create", 00:06:08.393 "bdev_virtio_attach_controller", 00:06:08.393 "bdev_virtio_scsi_get_devices", 00:06:08.393 "bdev_virtio_detach_controller", 00:06:08.393 "bdev_virtio_blk_set_hotplug", 00:06:08.393 "bdev_iscsi_delete", 00:06:08.393 "bdev_iscsi_create", 00:06:08.393 "bdev_iscsi_set_options", 00:06:08.393 "accel_error_inject_error", 00:06:08.393 "ioat_scan_accel_module", 00:06:08.393 "dsa_scan_accel_module", 00:06:08.393 "iaa_scan_accel_module", 00:06:08.393 "keyring_file_remove_key", 00:06:08.393 "keyring_file_add_key", 00:06:08.393 "keyring_linux_set_options", 00:06:08.393 "fsdev_aio_delete", 00:06:08.393 "fsdev_aio_create", 00:06:08.393 "iscsi_get_histogram", 00:06:08.393 "iscsi_enable_histogram", 00:06:08.393 "iscsi_set_options", 00:06:08.393 "iscsi_get_auth_groups", 00:06:08.393 "iscsi_auth_group_remove_secret", 00:06:08.393 "iscsi_auth_group_add_secret", 00:06:08.393 "iscsi_delete_auth_group", 00:06:08.393 "iscsi_create_auth_group", 00:06:08.393 "iscsi_set_discovery_auth", 00:06:08.393 "iscsi_get_options", 00:06:08.393 "iscsi_target_node_request_logout", 00:06:08.393 "iscsi_target_node_set_redirect", 00:06:08.393 "iscsi_target_node_set_auth", 00:06:08.393 "iscsi_target_node_add_lun", 00:06:08.393 "iscsi_get_stats", 00:06:08.393 "iscsi_get_connections", 00:06:08.393 "iscsi_portal_group_set_auth", 00:06:08.393 "iscsi_start_portal_group", 00:06:08.393 "iscsi_delete_portal_group", 00:06:08.393 "iscsi_create_portal_group", 00:06:08.393 "iscsi_get_portal_groups", 00:06:08.393 "iscsi_delete_target_node", 00:06:08.393 "iscsi_target_node_remove_pg_ig_maps", 00:06:08.393 "iscsi_target_node_add_pg_ig_maps", 00:06:08.393 "iscsi_create_target_node", 00:06:08.393 "iscsi_get_target_nodes", 00:06:08.393 "iscsi_delete_initiator_group", 00:06:08.393 "iscsi_initiator_group_remove_initiators", 00:06:08.393 "iscsi_initiator_group_add_initiators", 00:06:08.393 "iscsi_create_initiator_group", 00:06:08.393 "iscsi_get_initiator_groups", 00:06:08.393 "nvmf_set_crdt", 00:06:08.393 "nvmf_set_config", 00:06:08.393 "nvmf_set_max_subsystems", 00:06:08.393 "nvmf_stop_mdns_prr", 00:06:08.393 "nvmf_publish_mdns_prr", 00:06:08.393 "nvmf_subsystem_get_listeners", 00:06:08.393 "nvmf_subsystem_get_qpairs", 00:06:08.393 "nvmf_subsystem_get_controllers", 00:06:08.393 "nvmf_get_stats", 00:06:08.393 "nvmf_get_transports", 00:06:08.393 "nvmf_create_transport", 00:06:08.393 "nvmf_get_targets", 00:06:08.393 
"nvmf_delete_target", 00:06:08.393 "nvmf_create_target", 00:06:08.393 "nvmf_subsystem_allow_any_host", 00:06:08.393 "nvmf_subsystem_set_keys", 00:06:08.393 "nvmf_subsystem_remove_host", 00:06:08.393 "nvmf_subsystem_add_host", 00:06:08.393 "nvmf_ns_remove_host", 00:06:08.393 "nvmf_ns_add_host", 00:06:08.393 "nvmf_subsystem_remove_ns", 00:06:08.393 "nvmf_subsystem_set_ns_ana_group", 00:06:08.393 "nvmf_subsystem_add_ns", 00:06:08.393 "nvmf_subsystem_listener_set_ana_state", 00:06:08.393 "nvmf_discovery_get_referrals", 00:06:08.393 "nvmf_discovery_remove_referral", 00:06:08.393 "nvmf_discovery_add_referral", 00:06:08.393 "nvmf_subsystem_remove_listener", 00:06:08.393 "nvmf_subsystem_add_listener", 00:06:08.393 "nvmf_delete_subsystem", 00:06:08.393 "nvmf_create_subsystem", 00:06:08.393 "nvmf_get_subsystems", 00:06:08.393 "env_dpdk_get_mem_stats", 00:06:08.393 "nbd_get_disks", 00:06:08.393 "nbd_stop_disk", 00:06:08.393 "nbd_start_disk", 00:06:08.393 "ublk_recover_disk", 00:06:08.393 "ublk_get_disks", 00:06:08.393 "ublk_stop_disk", 00:06:08.393 "ublk_start_disk", 00:06:08.393 "ublk_destroy_target", 00:06:08.393 "ublk_create_target", 00:06:08.393 "virtio_blk_create_transport", 00:06:08.393 "virtio_blk_get_transports", 00:06:08.393 "vhost_controller_set_coalescing", 00:06:08.393 "vhost_get_controllers", 00:06:08.393 "vhost_delete_controller", 00:06:08.393 "vhost_create_blk_controller", 00:06:08.393 "vhost_scsi_controller_remove_target", 00:06:08.393 "vhost_scsi_controller_add_target", 00:06:08.393 "vhost_start_scsi_controller", 00:06:08.393 "vhost_create_scsi_controller", 00:06:08.393 "thread_set_cpumask", 00:06:08.393 "scheduler_set_options", 00:06:08.393 "framework_get_governor", 00:06:08.393 "framework_get_scheduler", 00:06:08.393 "framework_set_scheduler", 00:06:08.393 "framework_get_reactors", 00:06:08.393 "thread_get_io_channels", 00:06:08.393 "thread_get_pollers", 00:06:08.393 "thread_get_stats", 00:06:08.393 "framework_monitor_context_switch", 00:06:08.393 "spdk_kill_instance", 00:06:08.393 "log_enable_timestamps", 00:06:08.393 "log_get_flags", 00:06:08.393 "log_clear_flag", 00:06:08.393 "log_set_flag", 00:06:08.393 "log_get_level", 00:06:08.393 "log_set_level", 00:06:08.393 "log_get_print_level", 00:06:08.393 "log_set_print_level", 00:06:08.393 "framework_enable_cpumask_locks", 00:06:08.393 "framework_disable_cpumask_locks", 00:06:08.393 "framework_wait_init", 00:06:08.393 "framework_start_init", 00:06:08.393 "scsi_get_devices", 00:06:08.393 "bdev_get_histogram", 00:06:08.393 "bdev_enable_histogram", 00:06:08.393 "bdev_set_qos_limit", 00:06:08.393 "bdev_set_qd_sampling_period", 00:06:08.393 "bdev_get_bdevs", 00:06:08.393 "bdev_reset_iostat", 00:06:08.393 "bdev_get_iostat", 00:06:08.393 "bdev_examine", 00:06:08.393 "bdev_wait_for_examine", 00:06:08.393 "bdev_set_options", 00:06:08.393 "accel_get_stats", 00:06:08.393 "accel_set_options", 00:06:08.393 "accel_set_driver", 00:06:08.393 "accel_crypto_key_destroy", 00:06:08.393 "accel_crypto_keys_get", 00:06:08.393 "accel_crypto_key_create", 00:06:08.393 "accel_assign_opc", 00:06:08.393 "accel_get_module_info", 00:06:08.393 "accel_get_opc_assignments", 00:06:08.393 "vmd_rescan", 00:06:08.393 "vmd_remove_device", 00:06:08.393 "vmd_enable", 00:06:08.393 "sock_get_default_impl", 00:06:08.393 "sock_set_default_impl", 00:06:08.393 "sock_impl_set_options", 00:06:08.393 "sock_impl_get_options", 00:06:08.393 "iobuf_get_stats", 00:06:08.393 "iobuf_set_options", 00:06:08.393 "keyring_get_keys", 00:06:08.393 "framework_get_pci_devices", 00:06:08.393 
"framework_get_config", 00:06:08.393 "framework_get_subsystems", 00:06:08.394 "fsdev_set_opts", 00:06:08.394 "fsdev_get_opts", 00:06:08.394 "trace_get_info", 00:06:08.394 "trace_get_tpoint_group_mask", 00:06:08.394 "trace_disable_tpoint_group", 00:06:08.394 "trace_enable_tpoint_group", 00:06:08.394 "trace_clear_tpoint_mask", 00:06:08.394 "trace_set_tpoint_mask", 00:06:08.394 "notify_get_notifications", 00:06:08.394 "notify_get_types", 00:06:08.394 "spdk_get_version", 00:06:08.394 "rpc_get_methods" 00:06:08.394 ] 00:06:08.394 05:08:01 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:08.394 05:08:01 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:08.394 05:08:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.394 05:08:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:08.394 05:08:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70565 00:06:08.394 05:08:01 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70565 ']' 00:06:08.394 05:08:01 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70565 00:06:08.394 05:08:01 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:08.394 05:08:01 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.394 05:08:01 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70565 00:06:08.394 killing process with pid 70565 00:06:08.394 05:08:01 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.394 05:08:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.394 05:08:01 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70565' 00:06:08.394 05:08:01 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70565 00:06:08.394 05:08:01 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70565 00:06:08.653 ************************************ 00:06:08.653 END TEST spdkcli_tcp 00:06:08.653 ************************************ 00:06:08.653 00:06:08.653 real 0m1.566s 00:06:08.653 user 0m2.697s 00:06:08.653 sys 0m0.436s 00:06:08.653 05:08:01 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.653 05:08:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.911 05:08:01 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.911 05:08:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.911 05:08:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.911 05:08:01 -- common/autotest_common.sh@10 -- # set +x 00:06:08.911 ************************************ 00:06:08.911 START TEST dpdk_mem_utility 00:06:08.911 ************************************ 00:06:08.911 05:08:01 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.911 * Looking for test storage... 
00:06:08.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:08.911 05:08:01 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:08.911 05:08:01 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:08.911 05:08:01 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:08.911 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.911 05:08:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:08.911 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.911 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:08.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.911 --rc genhtml_branch_coverage=1 00:06:08.911 --rc genhtml_function_coverage=1 00:06:08.911 --rc genhtml_legend=1 00:06:08.911 --rc geninfo_all_blocks=1 00:06:08.911 --rc geninfo_unexecuted_blocks=1 00:06:08.911 00:06:08.911 ' 00:06:08.911 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:08.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.911 --rc 
genhtml_branch_coverage=1 00:06:08.911 --rc genhtml_function_coverage=1 00:06:08.911 --rc genhtml_legend=1 00:06:08.911 --rc geninfo_all_blocks=1 00:06:08.911 --rc geninfo_unexecuted_blocks=1 00:06:08.911 00:06:08.911 ' 00:06:08.911 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:08.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.911 --rc genhtml_branch_coverage=1 00:06:08.911 --rc genhtml_function_coverage=1 00:06:08.911 --rc genhtml_legend=1 00:06:08.911 --rc geninfo_all_blocks=1 00:06:08.911 --rc geninfo_unexecuted_blocks=1 00:06:08.911 00:06:08.911 ' 00:06:08.911 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:08.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.911 --rc genhtml_branch_coverage=1 00:06:08.911 --rc genhtml_function_coverage=1 00:06:08.911 --rc genhtml_legend=1 00:06:08.911 --rc geninfo_all_blocks=1 00:06:08.911 --rc geninfo_unexecuted_blocks=1 00:06:08.911 00:06:08.911 ' 00:06:08.911 05:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:08.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.911 05:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70660 00:06:08.911 05:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.911 05:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70660 00:06:08.911 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70660 ']' 00:06:08.911 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.911 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.911 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.911 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.911 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.911 [2024-11-10 05:08:02.095943] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
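spdk_tgt for the dpdk_mem_utility test is up at this point, and the single RPC the test script exercises is env_dpdk_get_mem_stats: the target writes a raw DPDK memory dump and returns the file path (the { "filename": "/tmp/spdk_mem_dump.txt" } reply just below), which scripts/dpdk_mem_info.py then parses into the heap/mempool/memzone summary that follows. Reusing the spdk_rpc helper sketched earlier:

```python
# Assumes the spdk_rpc helper from the earlier sketch and a running spdk_tgt.
stats = spdk_rpc("env_dpdk_get_mem_stats")["result"]
print("raw dump written to", stats["filename"])  # /tmp/spdk_mem_dump.txt in this run
```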
00:06:08.911 [2024-11-10 05:08:02.096196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70660 ] 00:06:09.168 [2024-11-10 05:08:02.240397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.168 [2024-11-10 05:08:02.280385] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.737 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.737 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:09.737 05:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:09.737 05:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:09.737 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.737 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.737 { 00:06:09.737 "filename": "/tmp/spdk_mem_dump.txt" 00:06:09.737 } 00:06:09.737 05:08:02 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.737 05:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:09.999 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:09.999 1 heaps totaling size 860.000000 MiB 00:06:09.999 size: 860.000000 MiB heap id: 0 00:06:09.999 end heaps---------- 00:06:09.999 9 mempools totaling size 642.649841 MiB 00:06:09.999 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:09.999 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:09.999 size: 92.545471 MiB name: bdev_io_70660 00:06:09.999 size: 51.011292 MiB name: evtpool_70660 00:06:09.999 size: 50.003479 MiB name: msgpool_70660 00:06:09.999 size: 36.509338 MiB name: fsdev_io_70660 00:06:09.999 size: 21.763794 MiB name: PDU_Pool 00:06:09.999 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:09.999 size: 0.026123 MiB name: Session_Pool 00:06:09.999 end mempools------- 00:06:09.999 6 memzones totaling size 4.142822 MiB 00:06:09.999 size: 1.000366 MiB name: RG_ring_0_70660 00:06:09.999 size: 1.000366 MiB name: RG_ring_1_70660 00:06:09.999 size: 1.000366 MiB name: RG_ring_4_70660 00:06:09.999 size: 1.000366 MiB name: RG_ring_5_70660 00:06:09.999 size: 0.125366 MiB name: RG_ring_2_70660 00:06:09.999 size: 0.015991 MiB name: RG_ring_3_70660 00:06:09.999 end memzones------- 00:06:09.999 05:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:09.999 heap id: 0 total size: 860.000000 MiB number of busy elements: 313 number of free elements: 16 00:06:09.999 list of free elements. 
size: 13.935425 MiB 00:06:09.999 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:09.999 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:09.999 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:09.999 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:09.999 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:09.999 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:09.999 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:09.999 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:09.999 element at address: 0x200000200000 with size: 0.835022 MiB 00:06:09.999 element at address: 0x20001d800000 with size: 0.567505 MiB 00:06:09.999 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:09.999 element at address: 0x200003e00000 with size: 0.487732 MiB 00:06:09.999 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:10.000 element at address: 0x200007000000 with size: 0.480286 MiB 00:06:10.000 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:10.000 element at address: 0x200003a00000 with size: 0.352844 MiB 00:06:10.000 list of standard malloc elements. size: 199.267883 MiB 00:06:10.000 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:10.000 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:10.000 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:10.000 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:10.000 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:10.000 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:10.000 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:10.000 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:10.000 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:10.000 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d6e00 with size: 0.000183 MiB 
00:06:10.000 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003a5a540 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003a5ea00 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:10.000 element at 
address: 0x200003e7d540 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000707af40 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:10.000 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000d87d4c0 
with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:10.000 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20001d891480 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20001d891540 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20001d891600 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:10.000 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d892ec0 with size: 0.000183 MiB 
00:06:10.001 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:10.001 element at 
address: 0x20001d895440 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6e580 
with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:10.001 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:10.002 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:10.002 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:10.002 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:10.002 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:10.002 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:10.002 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:10.002 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:10.002 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:10.002 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:10.002 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:10.002 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:10.002 list of memzone associated elements. 
size: 646.796692 MiB 00:06:10.002 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:10.002 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:10.002 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:10.002 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:10.002 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:10.002 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70660_0 00:06:10.002 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:10.002 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70660_0 00:06:10.002 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:10.002 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70660_0 00:06:10.002 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:10.002 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70660_0 00:06:10.002 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:10.002 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:10.002 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:10.002 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:10.002 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:10.002 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70660 00:06:10.002 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:10.002 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70660 00:06:10.002 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:10.002 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70660 00:06:10.002 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:10.002 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:10.002 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:10.002 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:10.002 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:10.002 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:10.002 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:10.002 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:10.002 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:10.002 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70660 00:06:10.002 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:10.002 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70660 00:06:10.002 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:10.002 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70660 00:06:10.002 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:10.002 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70660 00:06:10.002 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:10.002 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70660 00:06:10.002 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:10.002 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70660 00:06:10.002 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:10.002 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:10.002 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:10.002 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:10.002 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:10.002 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:10.002 element at address: 0x200003a5eac0 with size: 0.125488 MiB 00:06:10.002 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70660 00:06:10.002 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:10.002 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:10.002 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:10.002 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:10.002 element at address: 0x200003a5a800 with size: 0.016113 MiB 00:06:10.002 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70660 00:06:10.002 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:10.002 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:10.002 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:10.002 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70660 00:06:10.002 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:10.002 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70660 00:06:10.002 element at address: 0x200003a5a600 with size: 0.000305 MiB 00:06:10.002 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70660 00:06:10.002 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:10.002 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:10.002 05:08:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:10.002 05:08:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70660 00:06:10.002 05:08:03 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70660 ']' 00:06:10.002 05:08:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70660 00:06:10.002 05:08:03 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:10.002 05:08:03 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.002 05:08:03 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70660 00:06:10.002 killing process with pid 70660 00:06:10.002 05:08:03 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.002 05:08:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.002 05:08:03 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70660' 00:06:10.002 05:08:03 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70660 00:06:10.002 05:08:03 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70660 00:06:10.263 ************************************ 00:06:10.263 END TEST dpdk_mem_utility 00:06:10.263 ************************************ 00:06:10.263 00:06:10.263 real 0m1.422s 00:06:10.263 user 0m1.470s 00:06:10.263 sys 0m0.347s 00:06:10.263 05:08:03 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.263 05:08:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:10.263 05:08:03 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:10.263 05:08:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.263 05:08:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.263 05:08:03 -- common/autotest_common.sh@10 -- # set +x 
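A quick way to sanity-check these dumps is that the section totals are internally consistent: the summary above claims 9 mempools totaling 642.649841 MiB, and the per-pool sizes it lists do add up. With the numbers copied from this run:

```python
# Per-pool sizes from the "9 mempools" summary above, in MiB.
pools_mib = {
    "PDU_immediate_data_Pool": 212.674988,
    "PDU_data_out_Pool": 158.602051,
    "bdev_io_70660": 92.545471,
    "evtpool_70660": 51.011292,
    "msgpool_70660": 50.003479,
    "fsdev_io_70660": 36.509338,
    "PDU_Pool": 21.763794,
    "SCSI_TASK_Pool": 19.513306,
    "Session_Pool": 0.026123,
}
print(f"{sum(pools_mib.values()):.6f} MiB")  # 642.649842 -- the reported total, up to rounding
```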
00:06:10.263 ************************************ 00:06:10.263 START TEST event 00:06:10.263 ************************************ 00:06:10.263 05:08:03 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:10.263 * Looking for test storage... 00:06:10.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:10.263 05:08:03 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:10.263 05:08:03 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.263 05:08:03 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:10.263 05:08:03 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.263 05:08:03 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.263 05:08:03 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.263 05:08:03 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.263 05:08:03 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.263 05:08:03 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.264 05:08:03 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.264 05:08:03 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.264 05:08:03 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.264 05:08:03 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.264 05:08:03 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.264 05:08:03 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.264 05:08:03 event -- scripts/common.sh@344 -- # case "$op" in 00:06:10.264 05:08:03 event -- scripts/common.sh@345 -- # : 1 00:06:10.264 05:08:03 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.264 05:08:03 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.264 05:08:03 event -- scripts/common.sh@365 -- # decimal 1 00:06:10.264 05:08:03 event -- scripts/common.sh@353 -- # local d=1 00:06:10.264 05:08:03 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.264 05:08:03 event -- scripts/common.sh@355 -- # echo 1 00:06:10.264 05:08:03 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.264 05:08:03 event -- scripts/common.sh@366 -- # decimal 2 00:06:10.264 05:08:03 event -- scripts/common.sh@353 -- # local d=2 00:06:10.264 05:08:03 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.264 05:08:03 event -- scripts/common.sh@355 -- # echo 2 00:06:10.264 05:08:03 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.264 05:08:03 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.264 05:08:03 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.264 05:08:03 event -- scripts/common.sh@368 -- # return 0 00:06:10.264 05:08:03 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.264 05:08:03 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.264 --rc genhtml_branch_coverage=1 00:06:10.264 --rc genhtml_function_coverage=1 00:06:10.264 --rc genhtml_legend=1 00:06:10.264 --rc geninfo_all_blocks=1 00:06:10.264 --rc geninfo_unexecuted_blocks=1 00:06:10.264 00:06:10.264 ' 00:06:10.264 05:08:03 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.264 --rc genhtml_branch_coverage=1 00:06:10.264 --rc genhtml_function_coverage=1 00:06:10.264 --rc genhtml_legend=1 00:06:10.264 --rc 
geninfo_all_blocks=1 00:06:10.264 --rc geninfo_unexecuted_blocks=1 00:06:10.264 00:06:10.264 ' 00:06:10.264 05:08:03 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.264 --rc genhtml_branch_coverage=1 00:06:10.264 --rc genhtml_function_coverage=1 00:06:10.264 --rc genhtml_legend=1 00:06:10.264 --rc geninfo_all_blocks=1 00:06:10.264 --rc geninfo_unexecuted_blocks=1 00:06:10.264 00:06:10.264 ' 00:06:10.264 05:08:03 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.264 --rc genhtml_branch_coverage=1 00:06:10.264 --rc genhtml_function_coverage=1 00:06:10.264 --rc genhtml_legend=1 00:06:10.264 --rc geninfo_all_blocks=1 00:06:10.264 --rc geninfo_unexecuted_blocks=1 00:06:10.264 00:06:10.264 ' 00:06:10.264 05:08:03 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:10.264 05:08:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:10.264 05:08:03 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:10.264 05:08:03 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:10.264 05:08:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.264 05:08:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.524 ************************************ 00:06:10.524 START TEST event_perf 00:06:10.524 ************************************ 00:06:10.524 05:08:03 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:10.524 Running I/O for 1 seconds...[2024-11-10 05:08:03.530046] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:10.524 [2024-11-10 05:08:03.530320] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70740 ] 00:06:10.524 [2024-11-10 05:08:03.692339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.524 [2024-11-10 05:08:03.725671] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.524 Running I/O for 1 seconds...[2024-11-10 05:08:03.725977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.524 [2024-11-10 05:08:03.726159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.524 [2024-11-10 05:08:03.726258] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.910 00:06:11.910 lcore 0: 193322 00:06:11.910 lcore 1: 193317 00:06:11.910 lcore 2: 193320 00:06:11.910 lcore 3: 193321 00:06:11.910 done. 
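The four lcore counters above land within five events of each other (roughly 193.3k each over the one-second run), a reasonable sign that the event ring kept all four reactors in the 0xF mask evenly busy. Tallying the numbers from this run:

```python
# Per-lcore event counts printed above (values from this run).
counts = {0: 193322, 1: 193317, 2: 193320, 3: 193321}
total = sum(counts.values())
print(total, "events total,", max(counts.values()) - min(counts.values()), "max spread")
# -> 773280 events total, 5 max spread
```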
00:06:11.910 00:06:11.910 real 0m1.284s 00:06:11.910 user 0m4.076s 00:06:11.910 sys 0m0.084s 00:06:11.910 05:08:04 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.910 ************************************ 00:06:11.910 END TEST event_perf 00:06:11.910 ************************************ 00:06:11.910 05:08:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.910 05:08:04 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:11.910 05:08:04 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:11.910 05:08:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.910 05:08:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.910 ************************************ 00:06:11.910 START TEST event_reactor 00:06:11.910 ************************************ 00:06:11.910 05:08:04 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:11.910 [2024-11-10 05:08:04.850358] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:11.910 [2024-11-10 05:08:04.850537] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70780 ] 00:06:11.910 [2024-11-10 05:08:04.980715] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.910 [2024-11-10 05:08:05.013023] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.895 test_start 00:06:12.895 oneshot 00:06:12.895 tick 100 00:06:12.895 tick 100 00:06:12.895 tick 250 00:06:12.895 tick 100 00:06:12.895 tick 100 00:06:12.895 tick 100 00:06:12.895 tick 250 00:06:12.895 tick 500 00:06:12.895 tick 100 00:06:12.895 tick 100 00:06:12.895 tick 250 00:06:12.895 tick 100 00:06:12.895 tick 100 00:06:12.895 test_end 00:06:12.895 00:06:12.895 real 0m1.244s 00:06:12.895 user 0m1.072s 00:06:12.895 sys 0m0.066s 00:06:12.895 05:08:06 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.895 05:08:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:12.895 ************************************ 00:06:12.895 END TEST event_reactor 00:06:12.895 ************************************ 00:06:12.895 05:08:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.895 05:08:06 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:12.895 05:08:06 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.895 05:08:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.895 ************************************ 00:06:12.895 START TEST event_reactor_perf 00:06:12.895 ************************************ 00:06:12.895 05:08:06 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:13.156 [2024-11-10 05:08:06.144016] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:13.156 [2024-11-10 05:08:06.144229] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70811 ] 00:06:13.156 [2024-11-10 05:08:06.289520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.156 [2024-11-10 05:08:06.323381] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.539 test_start 00:06:14.539 test_end 00:06:14.539 Performance: 423230 events per second 00:06:14.539 00:06:14.539 real 0m1.258s 00:06:14.539 user 0m1.075s 00:06:14.539 sys 0m0.076s 00:06:14.539 05:08:07 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.539 ************************************ 00:06:14.539 END TEST event_reactor_perf 00:06:14.539 ************************************ 00:06:14.539 05:08:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.539 05:08:07 event -- event/event.sh@49 -- # uname -s 00:06:14.539 05:08:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:14.539 05:08:07 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:14.539 05:08:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.539 05:08:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.539 05:08:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.539 ************************************ 00:06:14.539 START TEST event_scheduler 00:06:14.539 ************************************ 00:06:14.539 05:08:07 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:14.539 * Looking for test storage... 
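Before the scheduler test spins up, one note on the reactor_perf result above: 423230 events per second on a single reactor (-c 0x1 in the EAL line) inverts to the average per-event turnaround, a number worth eyeballing when comparing runs:

```python
# Average per-event turnaround implied by the reactor_perf figure above.
events_per_sec = 423230
print(f"{1e9 / events_per_sec:.0f} ns/event")  # ~2363 ns on this run
```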
00:06:14.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:14.539 05:08:07 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:14.539 05:08:07 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:14.539 05:08:07 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:14.539 05:08:07 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.539 05:08:07 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:14.539 05:08:07 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.539 05:08:07 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:14.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.539 --rc genhtml_branch_coverage=1 00:06:14.539 --rc genhtml_function_coverage=1 00:06:14.539 --rc genhtml_legend=1 00:06:14.539 --rc geninfo_all_blocks=1 00:06:14.539 --rc geninfo_unexecuted_blocks=1 00:06:14.539 00:06:14.539 ' 00:06:14.539 05:08:07 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:14.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.539 --rc genhtml_branch_coverage=1 00:06:14.539 --rc genhtml_function_coverage=1 00:06:14.539 --rc genhtml_legend=1 00:06:14.539 --rc geninfo_all_blocks=1 00:06:14.540 --rc geninfo_unexecuted_blocks=1 00:06:14.540 00:06:14.540 ' 00:06:14.540 05:08:07 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:14.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.540 --rc genhtml_branch_coverage=1 00:06:14.540 --rc genhtml_function_coverage=1 00:06:14.540 --rc genhtml_legend=1 00:06:14.540 --rc geninfo_all_blocks=1 00:06:14.540 --rc geninfo_unexecuted_blocks=1 00:06:14.540 00:06:14.540 ' 00:06:14.540 05:08:07 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:14.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.540 --rc genhtml_branch_coverage=1 00:06:14.540 --rc genhtml_function_coverage=1 00:06:14.540 --rc genhtml_legend=1 00:06:14.540 --rc geninfo_all_blocks=1 00:06:14.540 --rc geninfo_unexecuted_blocks=1 00:06:14.540 00:06:14.540 ' 00:06:14.540 05:08:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:14.540 05:08:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70881 00:06:14.540 05:08:07 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.540 05:08:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70881 00:06:14.540 05:08:07 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70881 ']' 00:06:14.540 05:08:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:14.540 05:08:07 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.540 05:08:07 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.540 05:08:07 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.540 05:08:07 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.540 05:08:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.540 [2024-11-10 05:08:07.633367] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:14.540 [2024-11-10 05:08:07.633785] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70881 ] 00:06:14.801 [2024-11-10 05:08:07.784315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.801 [2024-11-10 05:08:07.821141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.801 [2024-11-10 05:08:07.821485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.801 [2024-11-10 05:08:07.821744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.801 [2024-11-10 05:08:07.821793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.370 05:08:08 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.370 05:08:08 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:15.370 05:08:08 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:15.371 05:08:08 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.371 05:08:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.371 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.371 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.371 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.371 POWER: Cannot set governor of lcore 0 to performance 00:06:15.371 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.371 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.371 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.371 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.371 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:15.371 POWER: Unable to set Power Management Environment for lcore 0 00:06:15.371 [2024-11-10 05:08:08.483134] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:15.371 [2024-11-10 05:08:08.483154] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:15.371 [2024-11-10 05:08:08.483162] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:15.371 [2024-11-10 05:08:08.483177] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:15.371 
[2024-11-10 05:08:08.483184] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:15.371 [2024-11-10 05:08:08.483194] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:15.371 05:08:08 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.371 05:08:08 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:15.371 05:08:08 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.371 05:08:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.371 [2024-11-10 05:08:08.536934] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:15.371 05:08:08 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.371 05:08:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:15.371 05:08:08 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.371 05:08:08 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.371 05:08:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.371 ************************************ 00:06:15.371 START TEST scheduler_create_thread 00:06:15.371 ************************************ 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.371 2 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.371 3 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.371 4 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.371 05:08:08 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.371 5 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.371 6 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.371 7 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.371 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.632 8 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.632 9 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.632 10 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.632 05:08:08 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.632 05:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.040 05:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.040 05:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:17.040 05:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:17.040 05:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.040 05:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.980 ************************************ 00:06:17.980 END TEST scheduler_create_thread 00:06:17.980 ************************************ 00:06:17.980 05:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.980 00:06:17.980 real 0m2.614s 00:06:17.980 user 0m0.015s 00:06:17.980 sys 0m0.004s 00:06:17.980 05:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.980 05:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.980 05:08:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:17.980 05:08:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70881 00:06:17.980 05:08:11 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70881 ']' 00:06:17.980 05:08:11 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70881 00:06:17.980 05:08:11 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:17.980 05:08:11 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.980 05:08:11 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70881 00:06:18.241 killing process with pid 70881 00:06:18.241 05:08:11 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:18.241 05:08:11 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:18.241 05:08:11 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70881' 00:06:18.241 05:08:11 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70881 00:06:18.241 05:08:11 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70881 00:06:18.502 [2024-11-10 05:08:11.640477] 
scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:18.763 ************************************ 00:06:18.763 END TEST event_scheduler 00:06:18.763 ************************************ 00:06:18.763 00:06:18.763 real 0m4.372s 00:06:18.763 user 0m8.010s 00:06:18.763 sys 0m0.343s 00:06:18.763 05:08:11 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.763 05:08:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.763 05:08:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:18.763 05:08:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:18.763 05:08:11 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.763 05:08:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.763 05:08:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.763 ************************************ 00:06:18.763 START TEST app_repeat 00:06:18.763 ************************************ 00:06:18.763 05:08:11 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:18.763 05:08:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.763 05:08:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.763 05:08:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:18.763 05:08:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.763 05:08:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:18.763 05:08:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:18.763 05:08:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:18.763 05:08:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70982 00:06:18.763 05:08:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.763 Process app_repeat pid: 70982 00:06:18.763 05:08:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70982' 00:06:18.763 spdk_app_start Round 0 00:06:18.763 05:08:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.763 05:08:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:18.763 05:08:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70982 /var/tmp/spdk-nbd.sock 00:06:18.763 05:08:11 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:18.763 05:08:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70982 ']' 00:06:18.763 05:08:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.763 05:08:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.763 05:08:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.763 05:08:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.763 05:08:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.763 [2024-11-10 05:08:11.871697] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
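For reference, the scheduler exercise that just finished reduces to the RPC sequence below. This is a condensed sketch, not the verbatim test script: in the real run rpc_cmd wraps rpc.py and its socket/plugin wiring, and the thread ids passed to set_active/delete (11 and 12 here) are values returned by the preceding create calls.

  # Condensed replay of the scheduler_create_thread flow (paths and masks as logged):
  app=/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $app -m 0xF -p 0x2 --wait-for-rpc -f &    # 4 cores, init gated on RPC
  scheduler_pid=$!

  $rpc framework_set_scheduler dynamic      # must happen before framework_start_init
  $rpc framework_start_init

  # the scheduler_thread_* RPCs are provided by the test binary's RPC plugin:
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  $rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50   # id from create output
  $rpc --plugin scheduler_plugin scheduler_thread_delete 12

  kill "$scheduler_pid"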
00:06:18.763 [2024-11-10 05:08:11.871880] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70982 ] 00:06:19.022 [2024-11-10 05:08:12.014908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.022 [2024-11-10 05:08:12.047159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.022 [2024-11-10 05:08:12.047248] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.592 05:08:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.592 05:08:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:19.592 05:08:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.852 Malloc0 00:06:19.852 05:08:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.112 Malloc1 00:06:20.112 05:08:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.112 05:08:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.112 05:08:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.112 05:08:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.112 05:08:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.112 05:08:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.112 05:08:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.112 05:08:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.112 05:08:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.112 05:08:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.112 05:08:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.112 05:08:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.112 05:08:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.112 05:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.112 05:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.112 05:08:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.372 /dev/nbd0 00:06:20.372 05:08:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.372 05:08:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.372 05:08:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:20.372 05:08:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:20.372 05:08:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:20.372 05:08:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:20.372 05:08:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:20.372 05:08:13 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:20.372 05:08:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:20.372 05:08:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:20.372 05:08:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.372 1+0 records in 00:06:20.372 1+0 records out 00:06:20.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371441 s, 11.0 MB/s 00:06:20.372 05:08:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.372 05:08:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:20.372 05:08:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.372 05:08:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:20.372 05:08:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:20.372 05:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.372 05:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.372 05:08:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:20.632 /dev/nbd1 00:06:20.632 05:08:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.632 05:08:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.632 05:08:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:20.632 05:08:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:20.632 05:08:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:20.632 05:08:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:20.632 05:08:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:20.632 05:08:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:20.632 05:08:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:20.632 05:08:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:20.632 05:08:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.632 1+0 records in 00:06:20.632 1+0 records out 00:06:20.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000172969 s, 23.7 MB/s 00:06:20.632 05:08:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.632 05:08:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:20.632 05:08:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.632 05:08:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:20.632 05:08:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:20.632 05:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.632 05:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.632 05:08:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.632 05:08:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
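The waitfornbd polling seen above is worth isolating. A minimal standalone sketch follows; the retry delay is an assumption, while the loop bound, the /proc/partitions check, and the direct-I/O dd probe are as logged.

  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                              # pacing is illustrative
      done
      # a single direct 4 KiB read proves the device serves I/O, not merely that it exists
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }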
00:06:20.632 05:08:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.632 05:08:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.632 { 00:06:20.632 "nbd_device": "/dev/nbd0", 00:06:20.632 "bdev_name": "Malloc0" 00:06:20.632 }, 00:06:20.632 { 00:06:20.632 "nbd_device": "/dev/nbd1", 00:06:20.632 "bdev_name": "Malloc1" 00:06:20.632 } 00:06:20.632 ]' 00:06:20.632 05:08:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.632 { 00:06:20.632 "nbd_device": "/dev/nbd0", 00:06:20.632 "bdev_name": "Malloc0" 00:06:20.632 }, 00:06:20.632 { 00:06:20.632 "nbd_device": "/dev/nbd1", 00:06:20.632 "bdev_name": "Malloc1" 00:06:20.632 } 00:06:20.632 ]' 00:06:20.632 05:08:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.890 /dev/nbd1' 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.890 /dev/nbd1' 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.890 256+0 records in 00:06:20.890 256+0 records out 00:06:20.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474046 s, 221 MB/s 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.890 256+0 records in 00:06:20.890 256+0 records out 00:06:20.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014741 s, 71.1 MB/s 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.890 256+0 records in 00:06:20.890 256+0 records out 00:06:20.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169888 s, 61.7 MB/s 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.890 05:08:13 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.890 05:08:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.177 05:08:14 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.177 05:08:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.437 05:08:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.437 05:08:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.437 05:08:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.437 05:08:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.437 05:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.437 05:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.437 05:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.438 05:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.438 05:08:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.438 05:08:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.438 05:08:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.438 05:08:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.438 05:08:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.697 05:08:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:21.697 [2024-11-10 05:08:14.915330] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.955 [2024-11-10 05:08:14.944245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.955 [2024-11-10 05:08:14.944354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.955 [2024-11-10 05:08:14.973494] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.955 [2024-11-10 05:08:14.973542] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.254 05:08:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:25.254 spdk_app_start Round 1 00:06:25.254 05:08:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:25.254 05:08:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70982 /var/tmp/spdk-nbd.sock 00:06:25.254 05:08:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70982 ']' 00:06:25.254 05:08:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.254 05:08:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.254 05:08:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
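Each round's data check above is a plain write-then-compare across both exported devices, distilled to the commands below (temp path illustrative, block sizes and counts as logged).

  tmp=/tmp/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write through each nbd
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$nbd"                             # exits non-zero on any mismatch
  done
  rm "$tmp"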
00:06:25.254 05:08:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.254 05:08:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.254 05:08:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.254 05:08:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:25.254 05:08:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.254 Malloc0 00:06:25.254 05:08:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.254 Malloc1 00:06:25.254 05:08:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.254 05:08:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.254 05:08:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.254 05:08:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:25.254 05:08:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.254 05:08:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:25.254 05:08:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.254 05:08:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.254 05:08:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.254 05:08:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.254 05:08:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.254 05:08:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.254 05:08:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:25.254 05:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.254 05:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.254 05:08:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:25.515 /dev/nbd0 00:06:25.515 05:08:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.515 05:08:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.515 05:08:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:25.515 05:08:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:25.515 05:08:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:25.515 05:08:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:25.515 05:08:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:25.515 05:08:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:25.515 05:08:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:25.515 05:08:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:25.515 05:08:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.515 1+0 records in 00:06:25.515 1+0 records out 
00:06:25.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199379 s, 20.5 MB/s 00:06:25.515 05:08:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.515 05:08:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:25.515 05:08:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.515 05:08:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:25.515 05:08:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:25.515 05:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.515 05:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.515 05:08:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:25.775 /dev/nbd1 00:06:25.775 05:08:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.775 05:08:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.775 05:08:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:25.775 05:08:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:25.775 05:08:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:25.775 05:08:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:25.775 05:08:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:25.775 05:08:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:25.775 05:08:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:25.775 05:08:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:25.775 05:08:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.775 1+0 records in 00:06:25.775 1+0 records out 00:06:25.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211354 s, 19.4 MB/s 00:06:25.775 05:08:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.775 05:08:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:25.775 05:08:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.775 05:08:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:25.775 05:08:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:25.775 05:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.775 05:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.775 05:08:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.775 05:08:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.775 05:08:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.037 { 00:06:26.037 "nbd_device": "/dev/nbd0", 00:06:26.037 "bdev_name": "Malloc0" 00:06:26.037 }, 00:06:26.037 { 00:06:26.037 "nbd_device": "/dev/nbd1", 00:06:26.037 "bdev_name": "Malloc1" 00:06:26.037 } 
00:06:26.037 ]' 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.037 { 00:06:26.037 "nbd_device": "/dev/nbd0", 00:06:26.037 "bdev_name": "Malloc0" 00:06:26.037 }, 00:06:26.037 { 00:06:26.037 "nbd_device": "/dev/nbd1", 00:06:26.037 "bdev_name": "Malloc1" 00:06:26.037 } 00:06:26.037 ]' 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.037 /dev/nbd1' 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.037 /dev/nbd1' 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.037 256+0 records in 00:06:26.037 256+0 records out 00:06:26.037 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00830161 s, 126 MB/s 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.037 256+0 records in 00:06:26.037 256+0 records out 00:06:26.037 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147347 s, 71.2 MB/s 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.037 256+0 records in 00:06:26.037 256+0 records out 00:06:26.037 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018228 s, 57.5 MB/s 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.037 05:08:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.298 05:08:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.298 05:08:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.298 05:08:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.298 05:08:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.298 05:08:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.298 05:08:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.298 05:08:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.298 05:08:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.298 05:08:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.298 05:08:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:26.562 05:08:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:26.562 05:08:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:26.562 05:08:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:26.562 05:08:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.562 05:08:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.562 05:08:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:26.562 05:08:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.562 05:08:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.562 05:08:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.562 05:08:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.562 05:08:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.821 05:08:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.821 05:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.821 05:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:26.821 05:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.821 05:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.821 05:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.821 05:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:26.821 05:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.821 05:08:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.821 05:08:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.821 05:08:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.821 05:08:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.821 05:08:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.079 05:08:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:27.079 [2024-11-10 05:08:20.160266] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.079 [2024-11-10 05:08:20.187059] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.079 [2024-11-10 05:08:20.187077] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.079 [2024-11-10 05:08:20.216294] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.079 [2024-11-10 05:08:20.216339] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:30.374 spdk_app_start Round 2 00:06:30.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:30.374 05:08:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:30.374 05:08:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:30.374 05:08:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70982 /var/tmp/spdk-nbd.sock 00:06:30.374 05:08:23 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70982 ']' 00:06:30.374 05:08:23 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.374 05:08:23 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.374 05:08:23 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
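Between rounds the test verifies that every nbd export was detached before killing the app; the check reduces to the lines below (rpc path as logged; the '|| true' mirrors the trace, since grep -c exits non-zero when it counts zero matches).

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ] || exit 1       # no leftover /dev/nbd* exports allowed
  $rpc spdk_kill_instance SIGTERM    # tear down, then the next round restarts the app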
00:06:30.374 05:08:23 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.374 05:08:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:30.374 05:08:23 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.374 05:08:23 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:30.374 05:08:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.374 Malloc0 00:06:30.374 05:08:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.635 Malloc1 00:06:30.635 05:08:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.635 05:08:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.635 05:08:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.635 05:08:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.635 05:08:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.635 05:08:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.635 05:08:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.635 05:08:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.635 05:08:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.635 05:08:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.635 05:08:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.635 05:08:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.635 05:08:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:30.635 05:08:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.635 05:08:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.635 05:08:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:30.635 /dev/nbd0 00:06:30.896 05:08:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.896 05:08:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.896 05:08:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:30.896 05:08:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:30.896 05:08:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:30.896 05:08:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:30.896 05:08:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:30.896 05:08:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:30.896 05:08:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:30.896 05:08:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:30.896 05:08:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.896 1+0 records in 00:06:30.896 1+0 records out 
00:06:30.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000126871 s, 32.3 MB/s 00:06:30.896 05:08:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.896 05:08:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:30.896 05:08:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.896 05:08:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:30.896 05:08:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:30.896 05:08:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.896 05:08:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.896 05:08:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.896 /dev/nbd1 00:06:30.896 05:08:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.896 05:08:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.896 05:08:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:30.896 05:08:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:30.896 05:08:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:30.896 05:08:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:30.896 05:08:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:30.896 05:08:24 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:30.896 05:08:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:30.896 05:08:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:30.896 05:08:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.896 1+0 records in 00:06:30.896 1+0 records out 00:06:30.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284244 s, 14.4 MB/s 00:06:30.896 05:08:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.157 05:08:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:31.157 05:08:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.157 05:08:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:31.157 05:08:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:31.157 { 00:06:31.157 "nbd_device": "/dev/nbd0", 00:06:31.157 "bdev_name": "Malloc0" 00:06:31.157 }, 00:06:31.157 { 00:06:31.157 "nbd_device": "/dev/nbd1", 00:06:31.157 "bdev_name": "Malloc1" 00:06:31.157 } 
00:06:31.157 ]' 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.157 { 00:06:31.157 "nbd_device": "/dev/nbd0", 00:06:31.157 "bdev_name": "Malloc0" 00:06:31.157 }, 00:06:31.157 { 00:06:31.157 "nbd_device": "/dev/nbd1", 00:06:31.157 "bdev_name": "Malloc1" 00:06:31.157 } 00:06:31.157 ]' 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:31.157 /dev/nbd1' 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:31.157 /dev/nbd1' 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:31.157 256+0 records in 00:06:31.157 256+0 records out 00:06:31.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00414659 s, 253 MB/s 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.157 05:08:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:31.420 256+0 records in 00:06:31.420 256+0 records out 00:06:31.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179976 s, 58.3 MB/s 00:06:31.420 05:08:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.420 05:08:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:31.420 256+0 records in 00:06:31.420 256+0 records out 00:06:31.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173775 s, 60.3 MB/s 00:06:31.420 05:08:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:31.420 05:08:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.420 05:08:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.420 05:08:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.421 05:08:24 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.421 05:08:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.698 05:08:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.698 05:08:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.698 05:08:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.698 05:08:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.698 05:08:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.698 05:08:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.698 05:08:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.698 05:08:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.698 05:08:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.698 05:08:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.698 05:08:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.958 05:08:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.958 05:08:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.958 05:08:25 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:31.958 05:08:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.958 05:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.958 05:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.958 05:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.958 05:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.958 05:08:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.958 05:08:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.958 05:08:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.958 05:08:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.958 05:08:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:32.218 05:08:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.218 [2024-11-10 05:08:25.393818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.218 [2024-11-10 05:08:25.424028] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.218 [2024-11-10 05:08:25.424032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.479 [2024-11-10 05:08:25.455176] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.480 [2024-11-10 05:08:25.455234] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:35.781 05:08:28 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70982 /var/tmp/spdk-nbd.sock 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70982 ']' 00:06:35.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
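Stepping back: the waitfornbd helper traced earlier in this app_repeat run (the i = 1..20 loop, the /proc/partitions grep, and the 4 KiB O_DIRECT read) reduces to roughly the sketch below. The retry interval and the second retry loop the trace shows around dd are simplified, and $tmp_file stands in for the traced nbdtest path:

    waitfornbd() {
        local nbd_name=$1 i
        # wait until the kernel lists the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off; the trace does not show the interval
        done
        # prove the device answers I/O: read one 4 KiB block with O_DIRECT
        dd if="/dev/$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s "$tmp_file")
        rm -f "$tmp_file"
        [[ $size != 0 ]]    # traced as: '[' 4096 '!=' 0 ']'
    }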
00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:35.781 05:08:28 event.app_repeat -- event/event.sh@39 -- # killprocess 70982 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70982 ']' 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70982 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70982 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.781 killing process with pid 70982 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70982' 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70982 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70982 00:06:35.781 spdk_app_start is called in Round 0. 00:06:35.781 Shutdown signal received, stop current app iteration 00:06:35.781 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:35.781 spdk_app_start is called in Round 1. 00:06:35.781 Shutdown signal received, stop current app iteration 00:06:35.781 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:35.781 spdk_app_start is called in Round 2. 00:06:35.781 Shutdown signal received, stop current app iteration 00:06:35.781 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:35.781 spdk_app_start is called in Round 3. 00:06:35.781 Shutdown signal received, stop current app iteration 00:06:35.781 05:08:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:35.781 05:08:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:35.781 00:06:35.781 real 0m16.818s 00:06:35.781 user 0m37.468s 00:06:35.781 sys 0m2.035s 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.781 05:08:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.781 ************************************ 00:06:35.781 END TEST app_repeat 00:06:35.781 ************************************ 00:06:35.781 05:08:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:35.781 05:08:28 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:35.781 05:08:28 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.781 05:08:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.781 05:08:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.781 ************************************ 00:06:35.781 START TEST cpu_locks 00:06:35.781 ************************************ 00:06:35.781 05:08:28 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:35.781 * Looking for test storage... 
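The killprocess helper traced above for pid 70982 (the kill -0 probe, uname gate, ps comm lookup, then kill + wait) has this shape; the sudo special-case is only noted and the non-Linux branch is omitted:

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                      # is it still alive?
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # the traced '[' reactor_0 = sudo ']' check guards against signalling
        # a sudo wrapper instead of the reactor process itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap it; propagates exit status
    }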
00:06:35.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:35.781 05:08:28 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:35.781 05:08:28 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:35.781 05:08:28 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:35.781 05:08:28 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:35.781 05:08:28 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.781 05:08:28 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.781 05:08:28 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.781 05:08:28 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.781 05:08:28 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.781 05:08:28 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.781 05:08:28 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.781 05:08:28 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.782 05:08:28 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:35.782 05:08:28 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.782 05:08:28 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:35.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.782 --rc genhtml_branch_coverage=1 00:06:35.782 --rc genhtml_function_coverage=1 00:06:35.782 --rc genhtml_legend=1 00:06:35.782 --rc geninfo_all_blocks=1 00:06:35.782 --rc geninfo_unexecuted_blocks=1 00:06:35.782 00:06:35.782 ' 00:06:35.782 05:08:28 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:35.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.782 --rc genhtml_branch_coverage=1 00:06:35.782 --rc genhtml_function_coverage=1 
00:06:35.782 --rc genhtml_legend=1 00:06:35.782 --rc geninfo_all_blocks=1 00:06:35.782 --rc geninfo_unexecuted_blocks=1 00:06:35.782 00:06:35.782 ' 00:06:35.782 05:08:28 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:35.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.782 --rc genhtml_branch_coverage=1 00:06:35.782 --rc genhtml_function_coverage=1 00:06:35.782 --rc genhtml_legend=1 00:06:35.782 --rc geninfo_all_blocks=1 00:06:35.782 --rc geninfo_unexecuted_blocks=1 00:06:35.782 00:06:35.782 ' 00:06:35.782 05:08:28 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:35.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.782 --rc genhtml_branch_coverage=1 00:06:35.782 --rc genhtml_function_coverage=1 00:06:35.782 --rc genhtml_legend=1 00:06:35.782 --rc geninfo_all_blocks=1 00:06:35.782 --rc geninfo_unexecuted_blocks=1 00:06:35.782 00:06:35.782 ' 00:06:35.782 05:08:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:35.782 05:08:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:35.782 05:08:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:35.782 05:08:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:35.782 05:08:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.782 05:08:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.782 05:08:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.782 ************************************ 00:06:35.782 START TEST default_locks 00:06:35.782 ************************************ 00:06:35.782 05:08:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:35.782 05:08:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71396 00:06:35.782 05:08:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71396 00:06:35.782 05:08:28 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71396 ']' 00:06:35.782 05:08:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.782 05:08:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.782 05:08:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.782 05:08:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.782 05:08:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.782 05:08:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.782 [2024-11-10 05:08:28.893783] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
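The lcov gate at the top of cpu_locks runs through the version comparison in scripts/common.sh; below is a sketch of the '<' path only, keeping the traced IFS=.-: field splitting and per-field compare (the traced decimal sanitizer is elided, so fields are assumed numeric):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            # absent fields count as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater, so not '<'
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                                              # equal, so not strictly '<'
    }

With the traced inputs, lt 1.15 2 splits into (1 15) and (2) and succeeds at the first field, since 1 < 2.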
00:06:35.782 [2024-11-10 05:08:28.893884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71396 ] 00:06:36.043 [2024-11-10 05:08:29.037520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.043 [2024-11-10 05:08:29.067017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.614 05:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.614 05:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:36.614 05:08:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71396 00:06:36.614 05:08:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71396 00:06:36.614 05:08:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.875 05:08:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71396 00:06:36.875 05:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71396 ']' 00:06:36.875 05:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71396 00:06:36.875 05:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:36.875 05:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.875 05:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71396 00:06:36.875 05:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.875 05:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.875 killing process with pid 71396 00:06:36.875 05:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71396' 00:06:36.875 05:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71396 00:06:36.875 05:08:29 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71396 00:06:37.160 05:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71396 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71396 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71396 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71396 ']' 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.161 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.161 ERROR: process (pid: 71396) is no longer running 00:06:37.161 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71396) - No such process 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.161 00:06:37.161 real 0m1.400s 00:06:37.161 user 0m1.451s 00:06:37.161 sys 0m0.408s 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.161 ************************************ 00:06:37.161 END TEST default_locks 00:06:37.161 05:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.161 ************************************ 00:06:37.161 05:08:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:37.161 05:08:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.161 05:08:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.161 05:08:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.161 ************************************ 00:06:37.161 START TEST default_locks_via_rpc 00:06:37.161 ************************************ 00:06:37.161 05:08:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:37.161 05:08:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71449 00:06:37.161 05:08:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71449 00:06:37.161 05:08:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71449 ']' 00:06:37.161 05:08:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
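The locks_exist check used by each of these tests (traced above as lslocks -p 71396 piped into a grep) reduces to the following; a target that claimed its cores holds file locks on per-core spdk_cpu_lock files (conventionally under /var/tmp — the exact paths are not shown in this log), which lslocks reports:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }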
00:06:37.161 05:08:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.161 05:08:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.161 05:08:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.161 05:08:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.161 05:08:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.161 [2024-11-10 05:08:30.343553] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:37.161 [2024-11-10 05:08:30.343673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71449 ] 00:06:37.422 [2024-11-10 05:08:30.482821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.422 [2024-11-10 05:08:30.514628] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71449 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71449 00:06:37.993 05:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.255 05:08:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71449 00:06:38.255 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71449 ']' 00:06:38.255 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71449 00:06:38.255 05:08:31 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:38.255 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.255 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71449 00:06:38.255 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.255 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.255 killing process with pid 71449 00:06:38.255 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71449' 00:06:38.255 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71449 00:06:38.255 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71449 00:06:38.518 00:06:38.518 real 0m1.446s 00:06:38.518 user 0m1.495s 00:06:38.518 sys 0m0.425s 00:06:38.518 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.518 ************************************ 00:06:38.518 END TEST default_locks_via_rpc 00:06:38.518 ************************************ 00:06:38.518 05:08:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.780 05:08:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:38.780 05:08:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.780 05:08:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.780 05:08:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.780 ************************************ 00:06:38.780 START TEST non_locking_app_on_locked_coremask 00:06:38.780 ************************************ 00:06:38.780 05:08:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:38.780 05:08:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71490 00:06:38.780 05:08:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71490 /var/tmp/spdk.sock 00:06:38.780 05:08:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71490 ']' 00:06:38.780 05:08:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.780 05:08:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.780 05:08:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
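The default_locks_via_rpc test that just finished toggles the same locks at runtime rather than at startup; the traced rpc_cmd calls correspond to plain rpc.py invocations against the default socket:

    # release the CPU core locks of a running target, then re-take them
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks

Between the two calls the target holds no core locks, which the traced no_locks assertion then verifies.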
00:06:38.780 05:08:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.780 05:08:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.780 05:08:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.780 [2024-11-10 05:08:31.833084] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:38.780 [2024-11-10 05:08:31.833204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71490 ] 00:06:38.780 [2024-11-10 05:08:31.980215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.040 [2024-11-10 05:08:32.013191] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.612 05:08:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.612 05:08:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:39.612 05:08:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:39.612 05:08:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71506 00:06:39.612 05:08:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71506 /var/tmp/spdk2.sock 00:06:39.612 05:08:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71506 ']' 00:06:39.612 05:08:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.612 05:08:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.612 05:08:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.612 05:08:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.612 05:08:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.612 [2024-11-10 05:08:32.724609] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:39.612 [2024-11-10 05:08:32.724984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71506 ] 00:06:39.872 [2024-11-10 05:08:32.876818] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
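In outline, the two-instance arrangement traced above for non_locking_app_on_locked_coremask (flags exactly as logged; '&' backgrounding stands in for the harness's pid bookkeeping):

    # first target claims core 0 and holds its lock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &

    # second target may share core 0 only because it opts out of locking,
    # and gets its own RPC socket so both instances stay reachable
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
        --disable-cpumask-locks -r /var/tmp/spdk2.sock &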
00:06:39.872 [2024-11-10 05:08:32.876870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.872 [2024-11-10 05:08:32.940163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.442 05:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.442 05:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:40.442 05:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71490 00:06:40.442 05:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.442 05:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71490 00:06:40.702 05:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71490 00:06:40.702 05:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71490 ']' 00:06:40.702 05:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71490 00:06:40.702 05:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:40.702 05:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.702 05:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71490 00:06:40.702 05:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.702 killing process with pid 71490 00:06:40.702 05:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.702 05:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71490' 00:06:40.702 05:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71490 00:06:40.702 05:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71490 00:06:41.271 05:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71506 00:06:41.271 05:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71506 ']' 00:06:41.271 05:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71506 00:06:41.271 05:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:41.271 05:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.271 05:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71506 00:06:41.271 05:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.271 05:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.271 05:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71506' 00:06:41.271 killing process with pid 71506 00:06:41.271 05:08:34 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71506 00:06:41.271 05:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71506 00:06:41.533 00:06:41.533 real 0m2.849s 00:06:41.533 user 0m3.170s 00:06:41.533 sys 0m0.749s 00:06:41.533 05:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.533 05:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.533 ************************************ 00:06:41.533 END TEST non_locking_app_on_locked_coremask 00:06:41.533 ************************************ 00:06:41.533 05:08:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:41.533 05:08:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.533 05:08:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.533 05:08:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.533 ************************************ 00:06:41.533 START TEST locking_app_on_unlocked_coremask 00:06:41.533 ************************************ 00:06:41.533 05:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:41.533 05:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71564 00:06:41.533 05:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71564 /var/tmp/spdk.sock 00:06:41.533 05:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71564 ']' 00:06:41.533 05:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.533 05:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.533 05:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:41.533 05:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.533 05:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.533 05:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.533 [2024-11-10 05:08:34.710299] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:41.533 [2024-11-10 05:08:34.710395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71564 ] 00:06:41.794 [2024-11-10 05:08:34.849068] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
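locking_app_on_unlocked_coremask, starting in the trace above, inverts that arrangement: the first instance opts out of the locks and the second takes them, so the lock assertion lands on the second pid. A sketch (the pid variables stand for the captured PIDs):

    spdk_tgt -m 0x1 --disable-cpumask-locks &     # first instance: no locks held
    pid1=$!
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # second instance: claims the core-0 lock
    pid2=$!
    locks_exist "$pid2"                           # asserted via lslocks, as above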
00:06:41.794 [2024-11-10 05:08:34.849106] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.794 [2024-11-10 05:08:34.877723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.366 05:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.366 05:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:42.366 05:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71580 00:06:42.366 05:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71580 /var/tmp/spdk2.sock 00:06:42.366 05:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71580 ']' 00:06:42.366 05:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:42.366 05:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.366 05:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.366 05:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.366 05:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.366 05:08:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.627 [2024-11-10 05:08:35.621018] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:42.627 [2024-11-10 05:08:35.621164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71580 ] 00:06:42.627 [2024-11-10 05:08:35.769192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.627 [2024-11-10 05:08:35.831086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.570 05:08:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.570 05:08:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:43.570 05:08:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71580 00:06:43.570 05:08:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.570 05:08:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71580 00:06:43.570 05:08:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71564 00:06:43.570 05:08:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71564 ']' 00:06:43.570 05:08:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71564 00:06:43.570 05:08:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:43.570 05:08:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.570 05:08:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71564 00:06:43.570 05:08:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.570 killing process with pid 71564 00:06:43.570 05:08:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.570 05:08:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71564' 00:06:43.570 05:08:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71564 00:06:43.570 05:08:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71564 00:06:44.143 05:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71580 00:06:44.143 05:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71580 ']' 00:06:44.143 05:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71580 00:06:44.143 05:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:44.143 05:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.143 05:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71580 00:06:44.143 05:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.143 killing process with pid 71580 00:06:44.143 05:08:37 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.143 05:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71580' 00:06:44.143 05:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71580 00:06:44.143 05:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71580 00:06:44.405 00:06:44.405 real 0m2.854s 00:06:44.405 user 0m3.179s 00:06:44.405 sys 0m0.761s 00:06:44.405 05:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.405 ************************************ 00:06:44.405 END TEST locking_app_on_unlocked_coremask 00:06:44.405 ************************************ 00:06:44.405 05:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.405 05:08:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:44.405 05:08:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.405 05:08:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.405 05:08:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.405 ************************************ 00:06:44.405 START TEST locking_app_on_locked_coremask 00:06:44.405 ************************************ 00:06:44.405 05:08:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:44.405 05:08:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71638 00:06:44.405 05:08:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71638 /var/tmp/spdk.sock 00:06:44.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.405 05:08:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71638 ']' 00:06:44.405 05:08:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.405 05:08:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.405 05:08:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.405 05:08:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.405 05:08:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.405 05:08:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.405 [2024-11-10 05:08:37.620005] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
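locking_app_on_locked_coremask, starting here, is the negative case: with the first target holding the core-0 lock, a second lock-taking target on the same mask must fail to come up. The expectation in sketch form (the claim_cpu_cores error traced further down confirms this failure path):

    spdk_tgt -m 0x1 &                                # pid 71638 claims core 0
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock    # second -m 0x1 target must never listen
    # traced: 'Cannot create lock on core 0, probably process 71638 has claimed it'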
00:06:44.405 [2024-11-10 05:08:37.620098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71638 ] 00:06:44.666 [2024-11-10 05:08:37.755325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.666 [2024-11-10 05:08:37.785580] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71654 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71654 /var/tmp/spdk2.sock 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71654 /var/tmp/spdk2.sock 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71654 /var/tmp/spdk2.sock 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71654 ']' 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.236 05:08:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.496 [2024-11-10 05:08:38.538403] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
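NOT, traced above wrapping waitforlisten, inverts the wrapped command's exit status; a sketch consistent with the es bookkeeping in the log (the valid_exec_arg / type -t precheck is elided):

    NOT() {
        local es=0
        "$@" || es=$?
        # the traced (( es > 128 )) branch special-cases signal exits; its exact
        # handling isn't visible in this log, so it is only noted here
        (( es != 0 ))    # succeed only if the wrapped command failed
    }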
00:06:45.496 [2024-11-10 05:08:38.538524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71654 ] 00:06:45.496 [2024-11-10 05:08:38.686153] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71638 has claimed it. 00:06:45.496 [2024-11-10 05:08:38.686209] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.067 ERROR: process (pid: 71654) is no longer running 00:06:46.067 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71654) - No such process 00:06:46.067 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.067 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:46.067 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:46.067 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.067 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:46.067 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.067 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71638 00:06:46.067 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71638 00:06:46.067 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.327 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71638 00:06:46.327 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71638 ']' 00:06:46.327 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71638 00:06:46.327 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:46.327 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.327 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71638 00:06:46.327 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.327 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.327 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71638' 00:06:46.327 killing process with pid 71638 00:06:46.327 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71638 00:06:46.327 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71638 00:06:46.594 00:06:46.594 real 0m2.126s 00:06:46.594 user 0m2.399s 00:06:46.594 sys 0m0.512s 00:06:46.594 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.594 ************************************ 00:06:46.594 END 
TEST locking_app_on_locked_coremask 00:06:46.594 05:08:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.594 ************************************ 00:06:46.594 05:08:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:46.594 05:08:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.594 05:08:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.594 05:08:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.594 ************************************ 00:06:46.594 START TEST locking_overlapped_coremask 00:06:46.594 ************************************ 00:06:46.594 05:08:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:46.594 05:08:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71696 00:06:46.594 05:08:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71696 /var/tmp/spdk.sock 00:06:46.594 05:08:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71696 ']' 00:06:46.594 05:08:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.594 05:08:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.594 05:08:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.594 05:08:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.594 05:08:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:46.594 05:08:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.594 [2024-11-10 05:08:39.795040] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:46.594 [2024-11-10 05:08:39.795138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71696 ] 00:06:46.857 [2024-11-10 05:08:39.937214] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.857 [2024-11-10 05:08:39.968315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.857 [2024-11-10 05:08:39.968469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.857 [2024-11-10 05:08:39.968503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71714 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71714 /var/tmp/spdk2.sock 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71714 /var/tmp/spdk2.sock 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71714 /var/tmp/spdk2.sock 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71714 ']' 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.461 05:08:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.721 [2024-11-10 05:08:40.727003] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:47.721 [2024-11-10 05:08:40.727121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71714 ] 00:06:47.721 [2024-11-10 05:08:40.880717] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71696 has claimed it. 00:06:47.721 [2024-11-10 05:08:40.880799] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:48.294 ERROR: process (pid: 71714) is no longer running 00:06:48.294 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71714) - No such process 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71696 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71696 ']' 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71696 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71696 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.294 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.294 killing process with pid 71696 00:06:48.295 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71696' 00:06:48.295 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71696 00:06:48.295 05:08:41 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71696 00:06:48.555 00:06:48.555 real 0m1.947s 00:06:48.555 user 0m5.404s 00:06:48.555 sys 0m0.402s 00:06:48.555 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.555 ************************************ 00:06:48.555 END TEST locking_overlapped_coremask 00:06:48.555 ************************************ 00:06:48.555 05:08:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.556 05:08:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:48.556 05:08:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.556 05:08:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.556 05:08:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.556 ************************************ 00:06:48.556 START TEST locking_overlapped_coremask_via_rpc 00:06:48.556 ************************************ 00:06:48.556 05:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:48.556 05:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71756 00:06:48.556 05:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71756 /var/tmp/spdk.sock 00:06:48.556 05:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71756 ']' 00:06:48.556 05:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.556 05:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.556 05:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.556 05:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.556 05:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:48.556 05:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.817 [2024-11-10 05:08:41.820872] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:48.817 [2024-11-10 05:08:41.821044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71756 ] 00:06:48.817 [2024-11-10 05:08:41.972030] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:48.817 [2024-11-10 05:08:41.972102] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.817 [2024-11-10 05:08:42.024863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.817 [2024-11-10 05:08:42.025248] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.817 [2024-11-10 05:08:42.025287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.759 05:08:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.759 05:08:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:49.759 05:08:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71774 00:06:49.759 05:08:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71774 /var/tmp/spdk2.sock 00:06:49.760 05:08:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71774 ']' 00:06:49.760 05:08:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.760 05:08:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.760 05:08:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.760 05:08:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.760 05:08:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:49.760 05:08:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.760 [2024-11-10 05:08:42.716155] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:49.760 [2024-11-10 05:08:42.716276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71774 ] 00:06:49.760 [2024-11-10 05:08:42.862464] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:49.760 [2024-11-10 05:08:42.862506] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.760 [2024-11-10 05:08:42.919867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.760 [2024-11-10 05:08:42.923060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.760 [2024-11-10 05:08:42.923114] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:50.331 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.331 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:50.331 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:50.331 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.331 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.592 [2024-11-10 05:08:43.574140] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71756 has claimed it. 
00:06:50.592 request: 00:06:50.592 { 00:06:50.592 "method": "framework_enable_cpumask_locks", 00:06:50.592 "req_id": 1 00:06:50.592 } 00:06:50.592 Got JSON-RPC error response 00:06:50.592 response: 00:06:50.592 { 00:06:50.592 "code": -32603, 00:06:50.592 "message": "Failed to claim CPU core: 2" 00:06:50.592 } 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71756 /var/tmp/spdk.sock 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71756 ']' 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71774 /var/tmp/spdk2.sock 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71774 ']' 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
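Aside (reproduction sketch, not output captured in this run): the -32603 "Failed to claim CPU core: 2" response above is the point of this test. Both targets were started with --disable-cpumask-locks, the first (mask 0x7, cores 0-2) then took its core locks over RPC, so the second (mask 0x1c, cores 2-4) has to fail on the shared core 2. The same scenario by hand, using the binaries, flags, RPC name and socket path that appear in the trace (rpc_cmd is the suite's wrapper around scripts/rpc.py; paths relative to the SPDK repo root are assumed):

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                   # first target, cores 0-2, no locks yet
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # second target, cores 2-4
    scripts/rpc.py framework_enable_cpumask_locks                         # first target claims /var/tmp/spdk_cpu_lock_000..002
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # expected to fail with -32603: core 2 already claimed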
00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.592 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.853 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.853 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:50.853 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:50.853 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.853 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.853 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.853 00:06:50.853 real 0m2.259s 00:06:50.853 user 0m1.063s 00:06:50.853 sys 0m0.130s 00:06:50.853 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.853 05:08:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.853 ************************************ 00:06:50.853 END TEST locking_overlapped_coremask_via_rpc 00:06:50.853 ************************************ 00:06:50.853 05:08:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:50.853 05:08:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71756 ]] 00:06:50.853 05:08:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71756 00:06:50.853 05:08:44 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71756 ']' 00:06:50.853 05:08:44 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71756 00:06:50.853 05:08:44 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:50.853 05:08:44 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.853 05:08:44 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71756 00:06:50.853 05:08:44 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.853 05:08:44 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.853 05:08:44 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71756' 00:06:50.853 killing process with pid 71756 00:06:50.853 05:08:44 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71756 00:06:50.853 05:08:44 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71756 00:06:51.114 05:08:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71774 ]] 00:06:51.114 05:08:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71774 00:06:51.114 05:08:44 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71774 ']' 00:06:51.114 05:08:44 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71774 00:06:51.114 05:08:44 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:51.114 05:08:44 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.114 
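Aside (readability sketch): the escaped-glob comparison traced at event/cpu_locks.sh@36-38 above is hard to read in xtrace form. Unescaped, the check_remaining_locks helper amounts to the following; the function wrapper is reconstructed from the trace, so treat it as an approximation rather than the verbatim source:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually present on disk
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # one file per core in the 0x7 mask
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]       # pass only if the two lists match exactly
    }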
05:08:44 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71774 00:06:51.373 05:08:44 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:51.373 killing process with pid 71774 00:06:51.373 05:08:44 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:51.373 05:08:44 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71774' 00:06:51.373 05:08:44 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71774 00:06:51.373 05:08:44 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71774 00:06:51.373 05:08:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:51.373 05:08:44 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:51.373 05:08:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71756 ]] 00:06:51.374 05:08:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71756 00:06:51.374 05:08:44 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71756 ']' 00:06:51.374 05:08:44 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71756 00:06:51.374 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71756) - No such process 00:06:51.374 Process with pid 71756 is not found 00:06:51.374 05:08:44 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71756 is not found' 00:06:51.374 05:08:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71774 ]] 00:06:51.374 05:08:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71774 00:06:51.374 05:08:44 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71774 ']' 00:06:51.374 Process with pid 71774 is not found 00:06:51.374 05:08:44 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71774 00:06:51.374 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71774) - No such process 00:06:51.374 05:08:44 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71774 is not found' 00:06:51.374 05:08:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:51.374 00:06:51.374 real 0m15.901s 00:06:51.374 user 0m28.354s 00:06:51.374 sys 0m4.125s 00:06:51.374 05:08:44 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.374 05:08:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.374 ************************************ 00:06:51.374 END TEST cpu_locks 00:06:51.374 ************************************ 00:06:51.634 00:06:51.634 real 0m41.269s 00:06:51.634 user 1m20.209s 00:06:51.634 sys 0m6.948s 00:06:51.634 05:08:44 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.634 05:08:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.634 ************************************ 00:06:51.634 END TEST event 00:06:51.634 ************************************ 00:06:51.634 05:08:44 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:51.634 05:08:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.634 05:08:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.634 05:08:44 -- common/autotest_common.sh@10 -- # set +x 00:06:51.634 ************************************ 00:06:51.634 START TEST thread 00:06:51.634 ************************************ 00:06:51.634 05:08:44 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:51.634 * Looking for test storage... 
00:06:51.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:51.634 05:08:44 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:51.635 05:08:44 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:51.635 05:08:44 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:51.635 05:08:44 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:51.635 05:08:44 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.635 05:08:44 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.635 05:08:44 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.635 05:08:44 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.635 05:08:44 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.635 05:08:44 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.635 05:08:44 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.635 05:08:44 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.635 05:08:44 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.635 05:08:44 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.635 05:08:44 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.635 05:08:44 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:51.635 05:08:44 thread -- scripts/common.sh@345 -- # : 1 00:06:51.635 05:08:44 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.635 05:08:44 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.635 05:08:44 thread -- scripts/common.sh@365 -- # decimal 1 00:06:51.635 05:08:44 thread -- scripts/common.sh@353 -- # local d=1 00:06:51.635 05:08:44 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.635 05:08:44 thread -- scripts/common.sh@355 -- # echo 1 00:06:51.635 05:08:44 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.635 05:08:44 thread -- scripts/common.sh@366 -- # decimal 2 00:06:51.635 05:08:44 thread -- scripts/common.sh@353 -- # local d=2 00:06:51.635 05:08:44 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.635 05:08:44 thread -- scripts/common.sh@355 -- # echo 2 00:06:51.635 05:08:44 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.635 05:08:44 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.635 05:08:44 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.635 05:08:44 thread -- scripts/common.sh@368 -- # return 0 00:06:51.635 05:08:44 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.635 05:08:44 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:51.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.635 --rc genhtml_branch_coverage=1 00:06:51.635 --rc genhtml_function_coverage=1 00:06:51.635 --rc genhtml_legend=1 00:06:51.635 --rc geninfo_all_blocks=1 00:06:51.635 --rc geninfo_unexecuted_blocks=1 00:06:51.635 00:06:51.635 ' 00:06:51.635 05:08:44 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:51.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.635 --rc genhtml_branch_coverage=1 00:06:51.635 --rc genhtml_function_coverage=1 00:06:51.635 --rc genhtml_legend=1 00:06:51.635 --rc geninfo_all_blocks=1 00:06:51.635 --rc geninfo_unexecuted_blocks=1 00:06:51.635 00:06:51.635 ' 00:06:51.635 05:08:44 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:51.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:51.635 --rc genhtml_branch_coverage=1 00:06:51.635 --rc genhtml_function_coverage=1 00:06:51.635 --rc genhtml_legend=1 00:06:51.635 --rc geninfo_all_blocks=1 00:06:51.635 --rc geninfo_unexecuted_blocks=1 00:06:51.635 00:06:51.635 ' 00:06:51.635 05:08:44 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:51.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.635 --rc genhtml_branch_coverage=1 00:06:51.635 --rc genhtml_function_coverage=1 00:06:51.635 --rc genhtml_legend=1 00:06:51.635 --rc geninfo_all_blocks=1 00:06:51.635 --rc geninfo_unexecuted_blocks=1 00:06:51.635 00:06:51.635 ' 00:06:51.635 05:08:44 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.635 05:08:44 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:51.635 05:08:44 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.635 05:08:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.635 ************************************ 00:06:51.635 START TEST thread_poller_perf 00:06:51.635 ************************************ 00:06:51.635 05:08:44 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.635 [2024-11-10 05:08:44.837877] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:51.635 [2024-11-10 05:08:44.838328] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71901 ] 00:06:51.895 [2024-11-10 05:08:44.978085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.895 Running 1000 pollers for 1 seconds with 1 microseconds period. 
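Aside: the banner above decodes the poller_perf flags from the run_test line earlier in this trace — -b 1000 registers 1000 pollers, -l 1 gives each a 1-microsecond timer period, -t 1 measures for one second. The run that follows differs only in -l; the flag meanings are inferred from the banner text, and reading -l 0 as "poller runs on every reactor iteration rather than on a timer" is an assumption:

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # this run: timed pollers, 1 us period
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # next run: 0 us period, i.e. untimed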
00:06:51.895 [2024-11-10 05:08:45.027619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.270 [2024-11-10T05:08:46.506Z] ====================================== 00:06:53.270 [2024-11-10T05:08:46.506Z] busy:2609722386 (cyc) 00:06:53.270 [2024-11-10T05:08:46.506Z] total_run_count: 306000 00:06:53.270 [2024-11-10T05:08:46.506Z] tsc_hz: 2600000000 (cyc) 00:06:53.270 [2024-11-10T05:08:46.506Z] ====================================== 00:06:53.270 [2024-11-10T05:08:46.506Z] poller_cost: 8528 (cyc), 3280 (nsec) 00:06:53.270 00:06:53.270 real 0m1.285s 00:06:53.270 user 0m1.098s 00:06:53.270 sys 0m0.079s 00:06:53.270 05:08:46 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.270 05:08:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.270 ************************************ 00:06:53.270 END TEST thread_poller_perf 00:06:53.270 ************************************ 00:06:53.270 05:08:46 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.270 05:08:46 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:53.270 05:08:46 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.270 05:08:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.270 ************************************ 00:06:53.270 START TEST thread_poller_perf 00:06:53.270 ************************************ 00:06:53.270 05:08:46 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.270 [2024-11-10 05:08:46.168251] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:53.270 [2024-11-10 05:08:46.168346] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71932 ] 00:06:53.270 [2024-11-10 05:08:46.310567] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.270 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:53.270 [2024-11-10 05:08:46.344216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.238 [2024-11-10T05:08:47.474Z] ====================================== 00:06:54.238 [2024-11-10T05:08:47.474Z] busy:2603471758 (cyc) 00:06:54.238 [2024-11-10T05:08:47.474Z] total_run_count: 3974000 00:06:54.238 [2024-11-10T05:08:47.474Z] tsc_hz: 2600000000 (cyc) 00:06:54.238 [2024-11-10T05:08:47.474Z] ====================================== 00:06:54.238 [2024-11-10T05:08:47.474Z] poller_cost: 655 (cyc), 251 (nsec) 00:06:54.238 00:06:54.238 real 0m1.258s 00:06:54.238 user 0m1.085s 00:06:54.238 sys 0m0.066s 00:06:54.238 05:08:47 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.238 ************************************ 00:06:54.238 END TEST thread_poller_perf 00:06:54.238 05:08:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:54.238 ************************************ 00:06:54.238 05:08:47 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:54.238 00:06:54.238 real 0m2.774s 00:06:54.238 user 0m2.302s 00:06:54.238 sys 0m0.257s 00:06:54.238 05:08:47 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.238 05:08:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.238 ************************************ 00:06:54.238 END TEST thread 00:06:54.238 ************************************ 00:06:54.238 05:08:47 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:54.238 05:08:47 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:54.238 05:08:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.238 05:08:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.238 05:08:47 -- common/autotest_common.sh@10 -- # set +x 00:06:54.497 ************************************ 00:06:54.497 START TEST app_cmdline 00:06:54.497 ************************************ 00:06:54.497 05:08:47 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:54.497 * Looking for test storage... 
00:06:54.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:54.497 05:08:47 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:54.497 05:08:47 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:54.497 05:08:47 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:54.497 05:08:47 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.497 05:08:47 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:54.497 05:08:47 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.497 05:08:47 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:54.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.497 --rc genhtml_branch_coverage=1 00:06:54.497 --rc genhtml_function_coverage=1 00:06:54.497 --rc genhtml_legend=1 00:06:54.497 --rc geninfo_all_blocks=1 00:06:54.497 --rc geninfo_unexecuted_blocks=1 00:06:54.497 00:06:54.497 ' 00:06:54.497 05:08:47 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:54.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.497 --rc genhtml_branch_coverage=1 00:06:54.497 --rc genhtml_function_coverage=1 00:06:54.497 --rc genhtml_legend=1 00:06:54.497 --rc geninfo_all_blocks=1 00:06:54.497 --rc geninfo_unexecuted_blocks=1 00:06:54.497 
00:06:54.497 ' 00:06:54.497 05:08:47 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:54.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.497 --rc genhtml_branch_coverage=1 00:06:54.497 --rc genhtml_function_coverage=1 00:06:54.497 --rc genhtml_legend=1 00:06:54.497 --rc geninfo_all_blocks=1 00:06:54.497 --rc geninfo_unexecuted_blocks=1 00:06:54.497 00:06:54.497 ' 00:06:54.497 05:08:47 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:54.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.497 --rc genhtml_branch_coverage=1 00:06:54.497 --rc genhtml_function_coverage=1 00:06:54.497 --rc genhtml_legend=1 00:06:54.497 --rc geninfo_all_blocks=1 00:06:54.497 --rc geninfo_unexecuted_blocks=1 00:06:54.497 00:06:54.497 ' 00:06:54.497 05:08:47 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:54.497 05:08:47 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72016 00:06:54.497 05:08:47 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72016 00:06:54.497 05:08:47 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 72016 ']' 00:06:54.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.497 05:08:47 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.497 05:08:47 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.497 05:08:47 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.497 05:08:47 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.497 05:08:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:54.497 05:08:47 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:54.497 [2024-11-10 05:08:47.668865] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:54.497 [2024-11-10 05:08:47.668981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72016 ] 00:06:54.756 [2024-11-10 05:08:47.808573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.756 [2024-11-10 05:08:47.842598] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.322 05:08:48 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.322 05:08:48 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:55.322 05:08:48 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:55.581 { 00:06:55.581 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:55.581 "fields": { 00:06:55.581 "major": 24, 00:06:55.581 "minor": 9, 00:06:55.581 "patch": 1, 00:06:55.581 "suffix": "-pre", 00:06:55.581 "commit": "b18e1bd62" 00:06:55.581 } 00:06:55.581 } 00:06:55.581 05:08:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:55.581 05:08:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:55.581 05:08:48 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:55.581 05:08:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:55.581 05:08:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:55.581 05:08:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:55.581 05:08:48 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.581 05:08:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.581 05:08:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:55.581 05:08:48 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.581 05:08:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:55.581 05:08:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:55.581 05:08:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:55.581 05:08:48 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:55.581 05:08:48 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:55.581 05:08:48 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.581 05:08:48 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.581 05:08:48 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.581 05:08:48 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.581 05:08:48 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.581 05:08:48 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.581 05:08:48 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.581 05:08:48 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:55.581 05:08:48 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:55.840 request: 00:06:55.840 { 00:06:55.840 "method": "env_dpdk_get_mem_stats", 00:06:55.840 "req_id": 1 00:06:55.840 } 00:06:55.840 Got JSON-RPC error response 00:06:55.840 response: 00:06:55.840 { 00:06:55.840 "code": -32601, 00:06:55.840 "message": "Method not found" 00:06:55.840 } 00:06:55.840 05:08:48 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:55.840 05:08:48 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:55.840 05:08:48 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:55.840 05:08:48 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:55.840 05:08:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72016 00:06:55.840 05:08:48 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 72016 ']' 00:06:55.840 05:08:48 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 72016 00:06:55.840 05:08:48 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:55.840 05:08:48 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.840 05:08:48 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72016 00:06:55.840 05:08:48 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.840 05:08:48 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.840 killing process with pid 72016 00:06:55.840 05:08:48 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72016' 00:06:55.840 05:08:48 app_cmdline -- common/autotest_common.sh@969 -- # kill 72016 00:06:55.840 05:08:48 app_cmdline -- common/autotest_common.sh@974 -- # wait 72016 00:06:56.098 00:06:56.098 real 0m1.741s 00:06:56.098 user 0m2.086s 00:06:56.098 sys 0m0.387s 00:06:56.098 ************************************ 00:06:56.098 END TEST app_cmdline 00:06:56.098 ************************************ 00:06:56.098 05:08:49 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.098 05:08:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.098 05:08:49 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:56.098 05:08:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.098 05:08:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.098 05:08:49 -- common/autotest_common.sh@10 -- # set +x 00:06:56.098 ************************************ 00:06:56.098 START TEST version 00:06:56.098 ************************************ 00:06:56.098 05:08:49 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:56.098 * Looking for test storage... 
00:06:56.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:56.098 05:08:49 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:56.098 05:08:49 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:56.098 05:08:49 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:56.358 05:08:49 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:56.358 05:08:49 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.358 05:08:49 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.358 05:08:49 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.358 05:08:49 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.358 05:08:49 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.358 05:08:49 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.358 05:08:49 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.358 05:08:49 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.358 05:08:49 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.358 05:08:49 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.358 05:08:49 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.358 05:08:49 version -- scripts/common.sh@344 -- # case "$op" in 00:06:56.358 05:08:49 version -- scripts/common.sh@345 -- # : 1 00:06:56.358 05:08:49 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.358 05:08:49 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:56.358 05:08:49 version -- scripts/common.sh@365 -- # decimal 1 00:06:56.358 05:08:49 version -- scripts/common.sh@353 -- # local d=1 00:06:56.358 05:08:49 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.358 05:08:49 version -- scripts/common.sh@355 -- # echo 1 00:06:56.358 05:08:49 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.358 05:08:49 version -- scripts/common.sh@366 -- # decimal 2 00:06:56.358 05:08:49 version -- scripts/common.sh@353 -- # local d=2 00:06:56.358 05:08:49 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.358 05:08:49 version -- scripts/common.sh@355 -- # echo 2 00:06:56.358 05:08:49 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.358 05:08:49 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.358 05:08:49 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.358 05:08:49 version -- scripts/common.sh@368 -- # return 0 00:06:56.358 05:08:49 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.358 05:08:49 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:56.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.358 --rc genhtml_branch_coverage=1 00:06:56.358 --rc genhtml_function_coverage=1 00:06:56.358 --rc genhtml_legend=1 00:06:56.358 --rc geninfo_all_blocks=1 00:06:56.358 --rc geninfo_unexecuted_blocks=1 00:06:56.358 00:06:56.358 ' 00:06:56.358 05:08:49 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:56.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.358 --rc genhtml_branch_coverage=1 00:06:56.358 --rc genhtml_function_coverage=1 00:06:56.358 --rc genhtml_legend=1 00:06:56.358 --rc geninfo_all_blocks=1 00:06:56.358 --rc geninfo_unexecuted_blocks=1 00:06:56.358 00:06:56.358 ' 00:06:56.358 05:08:49 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:56.358 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:56.358 --rc genhtml_branch_coverage=1 00:06:56.358 --rc genhtml_function_coverage=1 00:06:56.358 --rc genhtml_legend=1 00:06:56.358 --rc geninfo_all_blocks=1 00:06:56.358 --rc geninfo_unexecuted_blocks=1 00:06:56.358 00:06:56.358 ' 00:06:56.358 05:08:49 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:56.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.358 --rc genhtml_branch_coverage=1 00:06:56.358 --rc genhtml_function_coverage=1 00:06:56.358 --rc genhtml_legend=1 00:06:56.358 --rc geninfo_all_blocks=1 00:06:56.358 --rc geninfo_unexecuted_blocks=1 00:06:56.358 00:06:56.358 ' 00:06:56.358 05:08:49 version -- app/version.sh@17 -- # get_header_version major 00:06:56.358 05:08:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:56.358 05:08:49 version -- app/version.sh@14 -- # cut -f2 00:06:56.358 05:08:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.358 05:08:49 version -- app/version.sh@17 -- # major=24 00:06:56.358 05:08:49 version -- app/version.sh@18 -- # get_header_version minor 00:06:56.358 05:08:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:56.358 05:08:49 version -- app/version.sh@14 -- # cut -f2 00:06:56.358 05:08:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.358 05:08:49 version -- app/version.sh@18 -- # minor=9 00:06:56.358 05:08:49 version -- app/version.sh@19 -- # get_header_version patch 00:06:56.358 05:08:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:56.358 05:08:49 version -- app/version.sh@14 -- # cut -f2 00:06:56.358 05:08:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.358 05:08:49 version -- app/version.sh@19 -- # patch=1 00:06:56.358 05:08:49 version -- app/version.sh@20 -- # get_header_version suffix 00:06:56.358 05:08:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:56.358 05:08:49 version -- app/version.sh@14 -- # cut -f2 00:06:56.358 05:08:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.358 05:08:49 version -- app/version.sh@20 -- # suffix=-pre 00:06:56.358 05:08:49 version -- app/version.sh@22 -- # version=24.9 00:06:56.358 05:08:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:56.358 05:08:49 version -- app/version.sh@25 -- # version=24.9.1 00:06:56.358 05:08:49 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:56.358 05:08:49 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:56.358 05:08:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:56.358 05:08:49 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:56.358 05:08:49 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:56.358 00:06:56.358 real 0m0.170s 00:06:56.358 user 0m0.124s 00:06:56.358 sys 0m0.071s 00:06:56.358 05:08:49 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.358 05:08:49 version -- common/autotest_common.sh@10 -- # set +x 00:06:56.358 ************************************ 00:06:56.358 END TEST version 
00:06:56.358 ************************************ 00:06:56.358 05:08:49 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:56.358 05:08:49 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:56.358 05:08:49 -- spdk/autotest.sh@194 -- # uname -s 00:06:56.358 05:08:49 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:56.358 05:08:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:56.358 05:08:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:56.358 05:08:49 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:56.358 05:08:49 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:56.358 05:08:49 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:56.358 05:08:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.358 05:08:49 -- common/autotest_common.sh@10 -- # set +x 00:06:56.358 ************************************ 00:06:56.358 START TEST blockdev_nvme 00:06:56.358 ************************************ 00:06:56.358 05:08:49 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:56.358 * Looking for test storage... 00:06:56.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:56.358 05:08:49 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:56.358 05:08:49 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:56.358 05:08:49 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:56.358 05:08:49 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:56.358 05:08:49 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.359 05:08:49 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:56.359 05:08:49 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.359 05:08:49 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:56.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.359 --rc genhtml_branch_coverage=1 00:06:56.359 --rc genhtml_function_coverage=1 00:06:56.359 --rc genhtml_legend=1 00:06:56.359 --rc geninfo_all_blocks=1 00:06:56.359 --rc geninfo_unexecuted_blocks=1 00:06:56.359 00:06:56.359 ' 00:06:56.359 05:08:49 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:56.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.359 --rc genhtml_branch_coverage=1 00:06:56.359 --rc genhtml_function_coverage=1 00:06:56.359 --rc genhtml_legend=1 00:06:56.359 --rc geninfo_all_blocks=1 00:06:56.359 --rc geninfo_unexecuted_blocks=1 00:06:56.359 00:06:56.359 ' 00:06:56.359 05:08:49 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:56.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.359 --rc genhtml_branch_coverage=1 00:06:56.359 --rc genhtml_function_coverage=1 00:06:56.359 --rc genhtml_legend=1 00:06:56.359 --rc geninfo_all_blocks=1 00:06:56.359 --rc geninfo_unexecuted_blocks=1 00:06:56.359 00:06:56.359 ' 00:06:56.359 05:08:49 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:56.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.359 --rc genhtml_branch_coverage=1 00:06:56.359 --rc genhtml_function_coverage=1 00:06:56.359 --rc genhtml_legend=1 00:06:56.359 --rc geninfo_all_blocks=1 00:06:56.359 --rc geninfo_unexecuted_blocks=1 00:06:56.359 00:06:56.359 ' 00:06:56.359 05:08:49 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:56.359 05:08:49 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:56.359 05:08:49 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:56.359 05:08:49 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:56.359 05:08:49 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:56.359 05:08:49 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:56.359 05:08:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:56.359 05:08:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:56.359 05:08:49 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:56.359 05:08:49 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:56.359 05:08:49 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:56.359 05:08:49 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:56.359 05:08:49 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:56.620 05:08:49 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:56.620 05:08:49 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:56.620 05:08:49 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:56.620 05:08:49 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:56.620 05:08:49 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:56.620 05:08:49 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:56.620 05:08:49 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:56.620 05:08:49 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:56.620 05:08:49 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:56.620 05:08:49 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:56.620 05:08:49 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:56.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.620 05:08:49 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72177 00:06:56.620 05:08:49 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:56.620 05:08:49 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 72177 00:06:56.620 05:08:49 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 72177 ']' 00:06:56.620 05:08:49 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:56.620 05:08:49 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.620 05:08:49 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.620 05:08:49 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.620 05:08:49 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.620 05:08:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.620 [2024-11-10 05:08:49.664370] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:56.620 [2024-11-10 05:08:49.664484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72177 ] 00:06:56.620 [2024-11-10 05:08:49.811807] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.620 [2024-11-10 05:08:49.843723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.555 05:08:50 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.555 05:08:50 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:57.555 05:08:50 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:57.555 05:08:50 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:57.555 05:08:50 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:57.555 05:08:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:57.555 05:08:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:57.556 05:08:50 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:57.556 05:08:50 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.556 05:08:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.815 05:08:50 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.815 05:08:50 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:57.815 05:08:50 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.815 05:08:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.815 05:08:50 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.815 05:08:50 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:57.815 05:08:50 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:57.815 05:08:50 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.815 05:08:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.815 05:08:50 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.815 05:08:50 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:57.815 05:08:50 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.815 05:08:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.815 05:08:50 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.815 05:08:50 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:57.815 05:08:50 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.815 05:08:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.815 05:08:50 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.815 05:08:50 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:57.815 05:08:50 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:57.815 05:08:50 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.815 05:08:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.815 05:08:50 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:57.815 05:08:50 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.815 05:08:50 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:57.816 05:08:50 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8fbe0fc7-7ad0-4f7e-87ed-50cd504d6a04"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8fbe0fc7-7ad0-4f7e-87ed-50cd504d6a04",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "169b9823-03b6-41a3-9b9f-64abeccedeef"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "169b9823-03b6-41a3-9b9f-64abeccedeef",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "b9303db5-9e00-4c44-83bf-733d6912681a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b9303db5-9e00-4c44-83bf-733d6912681a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8963b7f7-dd30-4ed2-91b3-e83aec844985"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8963b7f7-dd30-4ed2-91b3-e83aec844985",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "e66a6d02-2883-4d5d-a868-980e10f1e75c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e66a6d02-2883-4d5d-a868-980e10f1e75c",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "79123642-e428-4fe4-8049-3deac15ab5fa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "79123642-e428-4fe4-8049-3deac15ab5fa",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:57.816 05:08:50 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:57.816 05:08:50 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:57.816 05:08:50 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:57.816 05:08:50 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:57.816 05:08:50 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 72177 00:06:57.816 05:08:50 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 72177 ']' 00:06:57.816 05:08:50 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 72177 00:06:57.816 05:08:50 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:57.816 05:08:50 blockdev_nvme -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.816 05:08:50 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72177 00:06:57.816 05:08:50 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.816 05:08:50 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.816 killing process with pid 72177 00:06:57.816 05:08:50 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72177' 00:06:57.816 05:08:50 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 72177 00:06:57.816 05:08:50 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 72177 00:06:58.074 05:08:51 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:58.074 05:08:51 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:58.074 05:08:51 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:58.074 05:08:51 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.074 05:08:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.074 ************************************ 00:06:58.074 START TEST bdev_hello_world 00:06:58.074 ************************************ 00:06:58.074 05:08:51 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:58.333 [2024-11-10 05:08:51.322594] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:58.333 [2024-11-10 05:08:51.322708] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72244 ] 00:06:58.333 [2024-11-10 05:08:51.470352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.333 [2024-11-10 05:08:51.501887] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.900 [2024-11-10 05:08:51.868500] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:58.900 [2024-11-10 05:08:51.868543] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:58.900 [2024-11-10 05:08:51.868562] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:58.900 [2024-11-10 05:08:51.870617] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:58.900 [2024-11-10 05:08:51.871161] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:58.900 [2024-11-10 05:08:51.871187] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:58.900 [2024-11-10 05:08:51.871441] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
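The hello_bdev run above consumes the same bdev.json that the earlier load_subsystem_config call was fed. A minimal sketch of reproducing that setup by hand with one controller instead of four; the binary path, RPC method, parameters, and PCI address are taken from this log, while the outer "subsystems" wrapper is assumed from SPDK's JSON config file format:

    # Sketch: attach one PCIe NVMe controller, then run the hello_bdev example on it.
    # Assumption: config files wrap the per-subsystem object in a "subsystems" array.
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }
    EOF
    # Same invocation as the test above, pointed at the hand-written config.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1

On success this prints the same open/write/read notices seen above and stops once "Hello World!" is read back.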
00:06:58.900 00:06:58.900 [2024-11-10 05:08:51.871463] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:58.900 00:06:58.900 real 0m0.760s 00:06:58.900 user 0m0.488s 00:06:58.900 sys 0m0.168s 00:06:58.900 05:08:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.900 ************************************ 00:06:58.900 05:08:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:58.900 END TEST bdev_hello_world 00:06:58.900 ************************************ 00:06:58.900 05:08:52 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:58.900 05:08:52 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:58.900 05:08:52 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.900 05:08:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.900 ************************************ 00:06:58.900 START TEST bdev_bounds 00:06:58.900 ************************************ 00:06:58.900 05:08:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:58.900 05:08:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72275 00:06:58.900 05:08:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:58.900 Process bdevio pid: 72275 00:06:58.900 05:08:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72275' 00:06:58.900 05:08:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72275 00:06:58.900 05:08:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 72275 ']' 00:06:58.900 05:08:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.900 05:08:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.900 05:08:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:58.900 05:08:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.900 05:08:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.900 05:08:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:58.900 [2024-11-10 05:08:52.128391] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:58.900 [2024-11-10 05:08:52.128515] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72275 ] 00:06:59.158 [2024-11-10 05:08:52.279326] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.158 [2024-11-10 05:08:52.314524] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.158 [2024-11-10 05:08:52.314866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.158 [2024-11-10 05:08:52.315014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.092 05:08:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.092 05:08:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:00.092 05:08:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:00.092 I/O targets: 00:07:00.092 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:00.092 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:00.092 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.092 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.092 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.092 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:00.092 00:07:00.092 00:07:00.092 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.092 http://cunit.sourceforge.net/ 00:07:00.092 00:07:00.092 00:07:00.092 Suite: bdevio tests on: Nvme3n1 00:07:00.092 Test: blockdev write read block ...passed 00:07:00.092 Test: blockdev write zeroes read block ...passed 00:07:00.092 Test: blockdev write zeroes read no split ...passed 00:07:00.092 Test: blockdev write zeroes read split ...passed 00:07:00.092 Test: blockdev write zeroes read split partial ...passed 00:07:00.092 Test: blockdev reset ...[2024-11-10 05:08:53.081972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:00.092 [2024-11-10 05:08:53.086784] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.092 passed 00:07:00.092 Test: blockdev write read 8 blocks ...passed 00:07:00.092 Test: blockdev write read size > 128k ...passed 00:07:00.092 Test: blockdev write read invalid size ...passed 00:07:00.092 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.092 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.092 Test: blockdev write read max offset ...passed 00:07:00.092 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.092 Test: blockdev writev readv 8 blocks ...passed 00:07:00.092 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.092 Test: blockdev writev readv block ...passed 00:07:00.092 Test: blockdev writev readv size > 128k ...passed 00:07:00.092 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.092 Test: blockdev comparev and writev ...[2024-11-10 05:08:53.103770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a7a06000 len:0x1000 00:07:00.092 [2024-11-10 05:08:53.103818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.092 passed 00:07:00.092 Test: blockdev nvme passthru rw ...passed 00:07:00.092 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.092 Test: blockdev nvme admin passthru ...[2024-11-10 05:08:53.104361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.092 [2024-11-10 05:08:53.104387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.092 passed 00:07:00.092 Test: blockdev copy ...passed 00:07:00.092 Suite: bdevio tests on: Nvme2n3 00:07:00.092 Test: blockdev write read block ...passed 00:07:00.092 Test: blockdev write zeroes read block ...passed 00:07:00.092 Test: blockdev write zeroes read no split ...passed 00:07:00.092 Test: blockdev write zeroes read split ...passed 00:07:00.092 Test: blockdev write zeroes read split partial ...passed 00:07:00.092 Test: blockdev reset ...[2024-11-10 05:08:53.132816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:00.092 [2024-11-10 05:08:53.136311] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.092 passed 00:07:00.092 Test: blockdev write read 8 blocks ...passed 00:07:00.092 Test: blockdev write read size > 128k ...passed 00:07:00.092 Test: blockdev write read invalid size ...passed 00:07:00.092 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.092 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.092 Test: blockdev write read max offset ...passed 00:07:00.092 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.092 Test: blockdev writev readv 8 blocks ...passed 00:07:00.092 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.092 Test: blockdev writev readv block ...passed 00:07:00.092 Test: blockdev writev readv size > 128k ...passed 00:07:00.092 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.092 Test: blockdev comparev and writev ...[2024-11-10 05:08:53.152289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7e05000 len:0x1000 00:07:00.092 [2024-11-10 05:08:53.152332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.092 passed 00:07:00.092 Test: blockdev nvme passthru rw ...passed 00:07:00.092 Test: blockdev nvme passthru vendor specific ...[2024-11-10 05:08:53.154406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.092 [2024-11-10 05:08:53.154434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.092 passed 00:07:00.092 Test: blockdev nvme admin passthru ...passed 00:07:00.092 Test: blockdev copy ...passed 00:07:00.092 Suite: bdevio tests on: Nvme2n2 00:07:00.092 Test: blockdev write read block ...passed 00:07:00.092 Test: blockdev write zeroes read block ...passed 00:07:00.092 Test: blockdev write zeroes read no split ...passed 00:07:00.092 Test: blockdev write zeroes read split ...passed 00:07:00.092 Test: blockdev write zeroes read split partial ...passed 00:07:00.092 Test: blockdev reset ...[2024-11-10 05:08:53.180732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:00.092 passed 00:07:00.092 Test: blockdev write read 8 blocks ...[2024-11-10 05:08:53.182555] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.092 passed 00:07:00.092 Test: blockdev write read size > 128k ...passed 00:07:00.092 Test: blockdev write read invalid size ...passed 00:07:00.092 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.092 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.092 Test: blockdev write read max offset ...passed 00:07:00.092 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.092 Test: blockdev writev readv 8 blocks ...passed 00:07:00.092 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.092 Test: blockdev writev readv block ...passed 00:07:00.092 Test: blockdev writev readv size > 128k ...passed 00:07:00.092 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.092 Test: blockdev comparev and writev ...[2024-11-10 05:08:53.192113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8236000 len:0x1000 00:07:00.092 [2024-11-10 05:08:53.192151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.092 passed 00:07:00.092 Test: blockdev nvme passthru rw ...passed 00:07:00.092 Test: blockdev nvme passthru vendor specific ...[2024-11-10 05:08:53.193533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.092 [2024-11-10 05:08:53.193559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.092 passed 00:07:00.092 Test: blockdev nvme admin passthru ...passed 00:07:00.092 Test: blockdev copy ...passed 00:07:00.092 Suite: bdevio tests on: Nvme2n1 00:07:00.092 Test: blockdev write read block ...passed 00:07:00.092 Test: blockdev write zeroes read block ...passed 00:07:00.092 Test: blockdev write zeroes read no split ...passed 00:07:00.092 Test: blockdev write zeroes read split ...passed 00:07:00.092 Test: blockdev write zeroes read split partial ...passed 00:07:00.092 Test: blockdev reset ...[2024-11-10 05:08:53.218216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:00.092 [2024-11-10 05:08:53.220120] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.092 passed 00:07:00.092 Test: blockdev write read 8 blocks ...passed 00:07:00.092 Test: blockdev write read size > 128k ...passed 00:07:00.092 Test: blockdev write read invalid size ...passed 00:07:00.092 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.092 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.092 Test: blockdev write read max offset ...passed 00:07:00.092 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.092 Test: blockdev writev readv 8 blocks ...passed 00:07:00.092 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.092 Test: blockdev writev readv block ...passed 00:07:00.092 Test: blockdev writev readv size > 128k ...passed 00:07:00.092 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.092 Test: blockdev comparev and writev ...[2024-11-10 05:08:53.229194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8230000 len:0x1000 00:07:00.092 passed 00:07:00.092 Test: blockdev nvme passthru rw ...[2024-11-10 05:08:53.229233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.092 passed 00:07:00.092 Test: blockdev nvme passthru vendor specific ...[2024-11-10 05:08:53.229763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.092 [2024-11-10 05:08:53.229787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.092 passed 00:07:00.092 Test: blockdev nvme admin passthru ...passed 00:07:00.092 Test: blockdev copy ...passed 00:07:00.092 Suite: bdevio tests on: Nvme1n1 00:07:00.092 Test: blockdev write read block ...passed 00:07:00.092 Test: blockdev write zeroes read block ...passed 00:07:00.092 Test: blockdev write zeroes read no split ...passed 00:07:00.092 Test: blockdev write zeroes read split ...passed 00:07:00.092 Test: blockdev write zeroes read split partial ...passed 00:07:00.092 Test: blockdev reset ...[2024-11-10 05:08:53.251986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:00.093 [2024-11-10 05:08:53.253470] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.093 passed 00:07:00.093 Test: blockdev write read 8 blocks ...passed 00:07:00.093 Test: blockdev write read size > 128k ...passed 00:07:00.093 Test: blockdev write read invalid size ...passed 00:07:00.093 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.093 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.093 Test: blockdev write read max offset ...passed 00:07:00.093 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.093 Test: blockdev writev readv 8 blocks ...passed 00:07:00.093 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.093 Test: blockdev writev readv block ...passed 00:07:00.093 Test: blockdev writev readv size > 128k ...passed 00:07:00.093 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.093 Test: blockdev comparev and writev ...[2024-11-10 05:08:53.264590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d822c000 len:0x1000 00:07:00.093 [2024-11-10 05:08:53.264629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.093 passed 00:07:00.093 Test: blockdev nvme passthru rw ...passed 00:07:00.093 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.093 Test: blockdev nvme admin passthru ...[2024-11-10 05:08:53.265924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.093 [2024-11-10 05:08:53.265949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.093 passed 00:07:00.093 Test: blockdev copy ...passed 00:07:00.093 Suite: bdevio tests on: Nvme0n1 00:07:00.093 Test: blockdev write read block ...passed 00:07:00.093 Test: blockdev write zeroes read block ...passed 00:07:00.093 Test: blockdev write zeroes read no split ...passed 00:07:00.093 Test: blockdev write zeroes read split ...passed 00:07:00.093 Test: blockdev write zeroes read split partial ...passed 00:07:00.093 Test: blockdev reset ...[2024-11-10 05:08:53.288954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:00.093 [2024-11-10 05:08:53.290540] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:00.093 passed 00:07:00.093 Test: blockdev write read 8 blocks ...passed 00:07:00.093 Test: blockdev write read size > 128k ...passed 00:07:00.093 Test: blockdev write read invalid size ...passed 00:07:00.093 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.093 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.093 Test: blockdev write read max offset ...passed 00:07:00.093 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.093 Test: blockdev writev readv 8 blocks ...passed 00:07:00.093 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.093 Test: blockdev writev readv block ...passed 00:07:00.093 Test: blockdev writev readv size > 128k ...passed 00:07:00.093 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.093 Test: blockdev comparev and writev ...passed 00:07:00.093 Test: blockdev nvme passthru rw ...[2024-11-10 05:08:53.296385] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:00.093 separate metadata which is not supported yet. 00:07:00.093 passed 00:07:00.093 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.093 Test: blockdev nvme admin passthru ...[2024-11-10 05:08:53.296937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:00.093 [2024-11-10 05:08:53.296972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:00.093 passed 00:07:00.093 Test: blockdev copy ...passed 00:07:00.093 00:07:00.093 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.093 suites 6 6 n/a 0 0 00:07:00.093 tests 138 138 138 0 0 00:07:00.093 asserts 893 893 893 0 n/a 00:07:00.093 00:07:00.093 Elapsed time = 0.540 seconds 00:07:00.093 0 00:07:00.093 05:08:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72275 00:07:00.093 05:08:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 72275 ']' 00:07:00.093 05:08:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 72275 00:07:00.093 05:08:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:00.093 05:08:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.351 05:08:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72275 00:07:00.351 05:08:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.351 05:08:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.351 05:08:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72275' 00:07:00.351 killing process with pid 72275 00:07:00.351 05:08:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 72275 00:07:00.351 05:08:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 72275 00:07:00.351 05:08:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:00.351 00:07:00.351 real 0m1.428s 00:07:00.351 user 0m3.546s 00:07:00.351 sys 0m0.273s 00:07:00.351 05:08:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.351 ************************************ 00:07:00.351 END TEST bdev_bounds 00:07:00.351 ************************************ 00:07:00.351 05:08:53 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:00.351 05:08:53 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:00.351 05:08:53 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:00.351 05:08:53 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.351 05:08:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.351 ************************************ 00:07:00.351 START TEST bdev_nbd 00:07:00.351 ************************************ 00:07:00.351 05:08:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:00.351 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:00.351 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:00.351 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72318 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72318 /var/tmp/spdk-nbd.sock 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 72318 ']' 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:00.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
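The waitfornbd checks that follow reduce to two steps: poll /proc/partitions until the nbd device registers, then prove it serves I/O with a single 4 KiB O_DIRECT read. A standalone sketch of that readiness probe, with the retry bound, grep test, and dd flags mirroring this run; the function name and the short sleep between retries are assumptions, not part of the original helper:

    # Sketch: wait until an nbd device is usable after nbd_start_disk.
    wait_for_nbd() {
        local nbd_name=$1    # e.g. "nbd0"
        local i
        for ((i = 1; i <= 20; i++)); do
            # The device shows up in /proc/partitions once the kernel has sized it.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1        # assumed back-off; the trace only shows the loop and grep
        done
        # One direct read confirms the SPDK nbd server is actually answering I/O.
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }
    wait_for_nbd nbd0

The real helper additionally copies the block into a scratch file and stats the result, as the dd/stat/rm lines below show; the probe above keeps only the readiness logic.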
00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.352 05:08:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:00.610 [2024-11-10 05:08:53.618249] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:00.610 [2024-11-10 05:08:53.618646] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.610 [2024-11-10 05:08:53.767266] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.610 [2024-11-10 05:08:53.800180] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 
)) 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.543 1+0 records in 00:07:01.543 1+0 records out 00:07:01.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437982 s, 9.4 MB/s 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:01.543 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:01.801 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:01.801 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:01.801 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:01.801 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:01.801 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:01.801 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:01.801 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:01.801 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:01.801 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:01.801 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:01.801 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:01.801 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.801 1+0 records in 00:07:01.801 1+0 records out 00:07:01.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116079 s, 3.5 MB/s 00:07:01.801 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.801 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:01.801 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.802 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 
']' 00:07:01.802 05:08:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:01.802 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.802 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:01.802 05:08:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.060 1+0 records in 00:07:02.060 1+0 records out 00:07:02.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113446 s, 3.6 MB/s 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.060 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # break 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.318 1+0 records in 00:07:02.318 1+0 records out 00:07:02.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367519 s, 11.1 MB/s 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.318 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.576 1+0 records in 00:07:02.576 1+0 records out 00:07:02.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00081322 s, 5.0 MB/s 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 
)) 00:07:02.576 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.834 1+0 records in 00:07:02.834 1+0 records out 00:07:02.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122084 s, 3.4 MB/s 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.834 05:08:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.834 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:02.834 { 00:07:02.834 "nbd_device": "/dev/nbd0", 00:07:02.834 "bdev_name": "Nvme0n1" 00:07:02.834 }, 00:07:02.834 { 00:07:02.834 "nbd_device": "/dev/nbd1", 00:07:02.834 "bdev_name": "Nvme1n1" 00:07:02.834 }, 00:07:02.834 { 00:07:02.834 "nbd_device": "/dev/nbd2", 00:07:02.834 "bdev_name": "Nvme2n1" 00:07:02.834 }, 00:07:02.834 { 00:07:02.834 "nbd_device": "/dev/nbd3", 00:07:02.834 "bdev_name": "Nvme2n2" 00:07:02.834 }, 00:07:02.834 { 00:07:02.834 "nbd_device": "/dev/nbd4", 00:07:02.834 "bdev_name": "Nvme2n3" 00:07:02.834 }, 00:07:02.834 { 00:07:02.835 "nbd_device": "/dev/nbd5", 00:07:02.835 "bdev_name": "Nvme3n1" 00:07:02.835 } 00:07:02.835 ]' 00:07:02.835 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:02.835 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:02.835 { 00:07:02.835 "nbd_device": "/dev/nbd0", 00:07:02.835 "bdev_name": "Nvme0n1" 00:07:02.835 }, 00:07:02.835 { 
00:07:02.835 "nbd_device": "/dev/nbd1", 00:07:02.835 "bdev_name": "Nvme1n1" 00:07:02.835 }, 00:07:02.835 { 00:07:02.835 "nbd_device": "/dev/nbd2", 00:07:02.835 "bdev_name": "Nvme2n1" 00:07:02.835 }, 00:07:02.835 { 00:07:02.835 "nbd_device": "/dev/nbd3", 00:07:02.835 "bdev_name": "Nvme2n2" 00:07:02.835 }, 00:07:02.835 { 00:07:02.835 "nbd_device": "/dev/nbd4", 00:07:02.835 "bdev_name": "Nvme2n3" 00:07:02.835 }, 00:07:02.835 { 00:07:02.835 "nbd_device": "/dev/nbd5", 00:07:02.835 "bdev_name": "Nvme3n1" 00:07:02.835 } 00:07:02.835 ]' 00:07:02.835 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:02.835 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:02.835 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.835 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:02.835 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.835 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:02.835 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.835 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.093 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.093 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.093 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.093 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.093 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.093 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.093 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.093 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.093 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.093 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.351 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.351 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.351 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.351 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.351 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.351 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.351 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.351 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.351 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.351 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:03.611 
05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:03.611 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:03.611 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:03.611 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.611 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.611 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:03.611 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.611 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.611 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.611 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:03.871 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:03.871 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:03.871 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:03.871 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.871 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.871 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:03.871 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.871 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.871 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.871 05:08:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:04.129 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:04.129 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:04.129 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:04.129 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.129 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.129 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:04.129 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.129 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.129 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.130 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:04.130 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:04.130 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:04.130 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:04.130 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.130 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.130 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 
/proc/partitions 00:07:04.130 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.130 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.130 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 
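The waitfornbd helper traced above reduces to a poll-then-probe pattern: poll /proc/partitions until the kernel registers the device (up to 20 tries), then read a single 4 KiB block back with O_DIRECT and check that a non-empty copy landed. A minimal sketch of that pattern follows; the scratch-file path and the retry sleep are illustrative assumptions, not the exact helper from autotest_common.sh.

# Hedged sketch of the poll-then-probe pattern seen in the trace above.
# Assumptions: /tmp/nbdtest as the scratch file, 0.1 s between retries.
waitfornbd_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do                       # same 20-try bound as the trace
        grep -q -w "$nbd_name" /proc/partitions && break  # device visible to the kernel?
        sleep 0.1
    done
    # Probe: one O_DIRECT 4 KiB read must produce a non-empty copy.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ] || return 1
    rm -f /tmp/nbdtest
}

The companion waitfornbd_exit used during teardown runs the same polling loop in reverse, waiting for the device name to disappear from /proc/partitions after nbd_stop_disk.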
00:07:04.391 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:04.652 /dev/nbd0 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.652 1+0 records in 00:07:04.652 1+0 records out 00:07:04.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000917918 s, 4.5 MB/s 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:04.652 05:08:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:04.913 /dev/nbd1 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:07:04.913 1+0 records in 00:07:04.913 1+0 records out 00:07:04.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118091 s, 3.5 MB/s 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:04.913 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:05.199 /dev/nbd10 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.199 1+0 records in 00:07:05.199 1+0 records out 00:07:05.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000675566 s, 6.1 MB/s 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.199 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.200 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:05.461 /dev/nbd11 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.461 1+0 records in 00:07:05.461 1+0 records out 00:07:05.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000825517 s, 5.0 MB/s 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.461 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:05.722 /dev/nbd12 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.722 1+0 records in 00:07:05.722 1+0 records out 00:07:05.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334591 s, 12.2 MB/s 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.722 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:05.981 /dev/nbd13 00:07:05.981 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:05.981 05:08:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:05.981 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:05.981 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:05.981 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.981 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.981 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:05.981 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:05.981 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.981 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.981 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.981 1+0 records in 00:07:05.981 1+0 records out 00:07:05.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366968 s, 11.2 MB/s 00:07:05.981 05:08:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.981 05:08:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:05.981 05:08:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.981 05:08:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.981 05:08:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:05.981 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.981 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.981 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.982 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.982 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.982 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.982 { 00:07:05.982 "nbd_device": "/dev/nbd0", 00:07:05.982 "bdev_name": "Nvme0n1" 00:07:05.982 }, 00:07:05.982 { 00:07:05.982 "nbd_device": "/dev/nbd1", 00:07:05.982 "bdev_name": "Nvme1n1" 00:07:05.982 }, 00:07:05.982 { 00:07:05.982 "nbd_device": "/dev/nbd10", 00:07:05.982 "bdev_name": "Nvme2n1" 00:07:05.982 }, 00:07:05.982 { 00:07:05.982 "nbd_device": "/dev/nbd11", 00:07:05.982 
"bdev_name": "Nvme2n2" 00:07:05.982 }, 00:07:05.982 { 00:07:05.982 "nbd_device": "/dev/nbd12", 00:07:05.982 "bdev_name": "Nvme2n3" 00:07:05.982 }, 00:07:05.982 { 00:07:05.982 "nbd_device": "/dev/nbd13", 00:07:05.982 "bdev_name": "Nvme3n1" 00:07:05.982 } 00:07:05.982 ]' 00:07:05.982 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.982 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.982 { 00:07:05.982 "nbd_device": "/dev/nbd0", 00:07:05.982 "bdev_name": "Nvme0n1" 00:07:05.982 }, 00:07:05.982 { 00:07:05.982 "nbd_device": "/dev/nbd1", 00:07:05.982 "bdev_name": "Nvme1n1" 00:07:05.982 }, 00:07:05.982 { 00:07:05.982 "nbd_device": "/dev/nbd10", 00:07:05.982 "bdev_name": "Nvme2n1" 00:07:05.982 }, 00:07:05.982 { 00:07:05.982 "nbd_device": "/dev/nbd11", 00:07:05.982 "bdev_name": "Nvme2n2" 00:07:05.982 }, 00:07:05.982 { 00:07:05.982 "nbd_device": "/dev/nbd12", 00:07:05.982 "bdev_name": "Nvme2n3" 00:07:05.982 }, 00:07:05.982 { 00:07:05.982 "nbd_device": "/dev/nbd13", 00:07:05.982 "bdev_name": "Nvme3n1" 00:07:05.982 } 00:07:05.982 ]' 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:06.243 /dev/nbd1 00:07:06.243 /dev/nbd10 00:07:06.243 /dev/nbd11 00:07:06.243 /dev/nbd12 00:07:06.243 /dev/nbd13' 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:06.243 /dev/nbd1 00:07:06.243 /dev/nbd10 00:07:06.243 /dev/nbd11 00:07:06.243 /dev/nbd12 00:07:06.243 /dev/nbd13' 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:06.243 256+0 records in 00:07:06.243 256+0 records out 00:07:06.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00676125 s, 155 MB/s 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:06.243 256+0 records in 00:07:06.243 256+0 records out 00:07:06.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.063449 s, 16.5 MB/s 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:06.243 256+0 records in 00:07:06.243 256+0 records out 00:07:06.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0648151 s, 16.2 MB/s 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:06.243 256+0 records in 00:07:06.243 256+0 records out 00:07:06.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0662034 s, 15.8 MB/s 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.243 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:06.516 256+0 records in 00:07:06.516 256+0 records out 00:07:06.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0692755 s, 15.1 MB/s 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:06.516 256+0 records in 00:07:06.516 256+0 records out 00:07:06.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.091977 s, 11.4 MB/s 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:06.516 256+0 records in 00:07:06.516 256+0 records out 00:07:06.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0643936 s, 16.3 MB/s 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.516 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.780 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.780 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.780 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.780 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.780 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.780 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.780 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.780 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.780 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.780 05:08:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:07.038 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:07.038 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:07.038 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:07.038 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.038 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.038 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:07.038 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.038 05:09:00 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:07.038 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.038 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:07.299 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:07.299 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:07.299 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:07.299 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.299 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.299 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:07.299 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.299 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.299 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.299 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:07.559 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:07.559 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:07.559 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.560 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:07.818 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:07.818 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:07.818 05:09:00 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:07.818 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.818 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.818 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:07.818 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.818 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.818 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.818 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.818 05:09:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.080 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.080 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.080 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.080 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.080 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.080 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.080 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:08.080 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.080 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.080 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:08.080 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:08.080 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:08.080 05:09:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:08.080 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.080 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:08.080 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:08.341 malloc_lvol_verify 00:07:08.341 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:08.602 bb290d2b-40ae-4a3d-81fa-90d5809d9ee9 00:07:08.602 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:08.602 ef27f5ea-f1a4-4788-8f29-49ee5274baff 00:07:08.602 05:09:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:08.861 /dev/nbd0 00:07:08.861 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:08.861 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:08.861 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:08.861 05:09:02 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:08.861 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:08.861 mke2fs 1.47.0 (5-Feb-2023) 00:07:08.861 Discarding device blocks: 0/4096 done 00:07:08.861 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:08.861 00:07:08.861 Allocating group tables: 0/1 done 00:07:08.861 Writing inode tables: 0/1 done 00:07:08.861 Creating journal (1024 blocks): done 00:07:08.861 Writing superblocks and filesystem accounting information: 0/1 done 00:07:08.861 00:07:08.861 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:08.861 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.861 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:08.861 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.861 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:08.861 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.861 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72318 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 72318 ']' 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 72318 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72318 00:07:09.122 killing process with pid 72318 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72318' 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 72318 00:07:09.122 05:09:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 72318 00:07:09.382 05:09:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:09.382 00:07:09.382 real 0m8.882s 00:07:09.382 user 0m13.091s 00:07:09.382 sys 0m2.889s 00:07:09.382 05:09:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 
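The nbd_dd_data_verify pass traced above is a plain round trip: generate a 1 MiB random pattern, write it to each NBD device with O_DIRECT, then byte-compare every device against the pattern. A condensed sketch, with the pattern path and device list taken from the log (the loop structure is an assumption about how the helper is organized):

# Round-trip data check, condensed from the trace above.
RAND=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest     # pattern file, path from the log
dd if=/dev/urandom of="$RAND" bs=4096 count=256             # 1 MiB random pattern
for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    dd if="$RAND" of="$nbd" bs=4096 count=256 oflag=direct  # write the pattern out
done
for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    cmp -b -n 1M "$RAND" "$nbd"                             # non-zero exit on any mismatch
done
rm "$RAND"

The lvol tail of the test applies the same idea one level up: it exports a small malloc-backed lvol over /dev/nbd0 and uses a successful mkfs.ext4 as the pass signal.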
00:07:09.382 ************************************
00:07:09.382 END TEST bdev_nbd ************************************
00:07:09.382 05:09:02 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:07:09.382 05:09:02 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']'
00:07:09.382 skipping fio tests on NVMe due to multi-ns failures.
00:07:09.382 05:09:02 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:07:09.382 05:09:02 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:07:09.382 05:09:02 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:07:09.382 05:09:02 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:07:09.382 05:09:02 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:09.382 05:09:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:09.382 ************************************
00:07:09.382 START TEST bdev_verify ************************************
00:07:09.382 05:09:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:07:09.382 [2024-11-10 05:09:02.534979] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:09.382 [2024-11-10 05:09:02.535104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72684 ]
00:07:09.643 [2024-11-10 05:09:02.683797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:09.643 [2024-11-10 05:09:02.716604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:09.643 [2024-11-10 05:09:02.716632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:09.903 Running I/O for 5 seconds...
00:07:12.220 23168.00 IOPS, 90.50 MiB/s [2024-11-10T05:09:06.399Z] 23488.00 IOPS, 91.75 MiB/s [2024-11-10T05:09:07.342Z] 22677.33 IOPS, 88.58 MiB/s [2024-11-10T05:09:08.294Z] 22848.00 IOPS, 89.25 MiB/s [2024-11-10T05:09:08.294Z] 22758.40 IOPS, 88.90 MiB/s 00:07:15.058 Latency(us) 00:07:15.058 [2024-11-10T05:09:08.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.058 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.058 Verification LBA range: start 0x0 length 0xbd0bd 00:07:15.058 Nvme0n1 : 5.03 1881.24 7.35 0.00 0.00 67814.72 13611.32 66947.54 00:07:15.058 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.058 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:15.058 Nvme0n1 : 5.04 1854.65 7.24 0.00 0.00 68777.60 13409.67 68157.44 00:07:15.058 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.058 Verification LBA range: start 0x0 length 0xa0000 00:07:15.058 Nvme1n1 : 5.04 1880.71 7.35 0.00 0.00 67736.43 15022.87 58881.58 00:07:15.058 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.058 Verification LBA range: start 0xa0000 length 0xa0000 00:07:15.058 Nvme1n1 : 5.04 1854.15 7.24 0.00 0.00 68650.07 15325.34 61704.66 00:07:15.058 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.058 Verification LBA range: start 0x0 length 0x80000 00:07:15.058 Nvme2n1 : 5.06 1885.92 7.37 0.00 0.00 67408.86 6276.33 55655.19 00:07:15.058 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.058 Verification LBA range: start 0x80000 length 0x80000 00:07:15.058 Nvme2n1 : 5.06 1859.42 7.26 0.00 0.00 68324.54 4688.34 58478.28 00:07:15.058 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.058 Verification LBA range: start 0x0 length 0x80000 00:07:15.058 Nvme2n2 : 5.07 1895.04 7.40 0.00 0.00 67068.28 6604.01 55655.19 00:07:15.058 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.058 Verification LBA range: start 0x80000 length 0x80000 00:07:15.058 Nvme2n2 : 5.07 1868.94 7.30 0.00 0.00 67954.13 6175.51 58881.58 00:07:15.058 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.058 Verification LBA range: start 0x0 length 0x80000 00:07:15.058 Nvme2n3 : 5.07 1894.50 7.40 0.00 0.00 66940.10 6856.07 58074.98 00:07:15.058 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.058 Verification LBA range: start 0x80000 length 0x80000 00:07:15.058 Nvme2n3 : 5.07 1868.05 7.30 0.00 0.00 67832.18 7612.26 60898.07 00:07:15.058 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.058 Verification LBA range: start 0x0 length 0x20000 00:07:15.058 Nvme3n1 : 5.07 1893.94 7.40 0.00 0.00 66822.41 7461.02 59284.87 00:07:15.058 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.058 Verification LBA range: start 0x20000 length 0x20000 00:07:15.058 Nvme3n1 : 5.07 1867.53 7.30 0.00 0.00 67717.45 8065.97 64124.46 00:07:15.058 [2024-11-10T05:09:08.294Z] =================================================================================================================== 00:07:15.058 [2024-11-10T05:09:08.294Z] Total : 22504.09 87.91 0.00 0.00 67748.49 4688.34 68157.44 00:07:16.001 00:07:16.001 real 0m6.456s 00:07:16.001 user 0m12.220s 00:07:16.001 sys 0m0.200s 00:07:16.001 05:09:08 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.001 05:09:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:16.001 ************************************ 00:07:16.001 END TEST bdev_verify 00:07:16.001 ************************************ 00:07:16.001 05:09:08 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:16.001 05:09:08 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:16.001 05:09:08 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.001 05:09:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:16.001 ************************************ 00:07:16.001 START TEST bdev_verify_big_io 00:07:16.001 ************************************ 00:07:16.001 05:09:08 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:16.001 [2024-11-10 05:09:09.027903] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:16.001 [2024-11-10 05:09:09.028023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72777 ] 00:07:16.001 [2024-11-10 05:09:09.175870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.001 [2024-11-10 05:09:09.208225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.001 [2024-11-10 05:09:09.208341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.574 Running I/O for 5 seconds... 
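(Annotation: bdev_verify_big_io reuses the same bdevperf harness; per the trace, only the transfer size changes, from 4 KiB to 64 KiB, which is why the table below shows far lower IOPS at comparable MiB/s:)
    build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3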
00:07:22.159 983.00 IOPS, 61.44 MiB/s [2024-11-10T05:09:15.961Z] 2218.00 IOPS, 138.62 MiB/s [2024-11-10T05:09:15.961Z] 2889.67 IOPS, 180.60 MiB/s 00:07:22.725 Latency(us) 00:07:22.725 [2024-11-10T05:09:15.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.725 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.725 Verification LBA range: start 0x0 length 0xbd0b 00:07:22.725 Nvme0n1 : 5.75 113.06 7.07 0.00 0.00 1061569.58 19761.62 1096971.82 00:07:22.725 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.725 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:22.725 Nvme0n1 : 5.78 88.56 5.53 0.00 0.00 1380022.25 14216.27 1587382.74 00:07:22.725 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.725 Verification LBA range: start 0x0 length 0xa000 00:07:22.725 Nvme1n1 : 5.75 114.91 7.18 0.00 0.00 1019593.42 107277.39 916294.10 00:07:22.725 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.725 Verification LBA range: start 0xa000 length 0xa000 00:07:22.725 Nvme1n1 : 5.78 88.53 5.53 0.00 0.00 1327766.45 154060.01 1393799.48 00:07:22.725 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.725 Verification LBA range: start 0x0 length 0x8000 00:07:22.725 Nvme2n1 : 5.90 126.34 7.90 0.00 0.00 921622.13 35691.91 929199.66 00:07:22.725 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.725 Verification LBA range: start 0x8000 length 0x8000 00:07:22.725 Nvme2n1 : 5.97 96.44 6.03 0.00 0.00 1170974.63 66544.25 1206669.00 00:07:22.725 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.725 Verification LBA range: start 0x0 length 0x8000 00:07:22.725 Nvme2n2 : 5.91 125.47 7.84 0.00 0.00 895218.22 35691.91 1025991.29 00:07:22.725 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.725 Verification LBA range: start 0x8000 length 0x8000 00:07:22.725 Nvme2n2 : 6.02 106.35 6.65 0.00 0.00 1008169.12 21072.34 1219574.55 00:07:22.725 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.725 Verification LBA range: start 0x0 length 0x8000 00:07:22.725 Nvme2n3 : 5.91 129.99 8.12 0.00 0.00 841750.32 61704.66 980821.86 00:07:22.725 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.725 Verification LBA range: start 0x8000 length 0x8000 00:07:22.725 Nvme2n3 : 6.13 146.07 9.13 0.00 0.00 709005.78 11393.18 1251838.42 00:07:22.725 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.725 Verification LBA range: start 0x0 length 0x2000 00:07:22.725 Nvme3n1 : 5.97 146.18 9.14 0.00 0.00 728599.92 1978.68 1006632.96 00:07:22.725 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.725 Verification LBA range: start 0x2000 length 0x2000 00:07:22.725 Nvme3n1 : 6.31 247.71 15.48 0.00 0.00 398378.04 115.79 2542393.50 00:07:22.725 [2024-11-10T05:09:15.961Z] =================================================================================================================== 00:07:22.725 [2024-11-10T05:09:15.961Z] Total : 1529.60 95.60 0.00 0.00 871295.11 115.79 2542393.50 00:07:23.661 00:07:23.661 real 0m7.879s 00:07:23.661 user 0m15.014s 00:07:23.661 sys 0m0.219s 00:07:23.661 05:09:16 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.661 05:09:16 
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:23.661 ************************************ 00:07:23.661 END TEST bdev_verify_big_io 00:07:23.661 ************************************ 00:07:23.661 05:09:16 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.661 05:09:16 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:23.661 05:09:16 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.661 05:09:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.661 ************************************ 00:07:23.661 START TEST bdev_write_zeroes 00:07:23.661 ************************************ 00:07:23.661 05:09:16 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.919 [2024-11-10 05:09:16.951734] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:23.919 [2024-11-10 05:09:16.951845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72875 ] 00:07:23.919 [2024-11-10 05:09:17.100126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.919 [2024-11-10 05:09:17.143538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.485 Running I/O for 1 seconds... 
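(Annotation: the write_zeroes pass swaps in a different workload on a single core for one second; a zero-fill command carries no data payload, so the figures below largely reflect command handling rather than data transfer. Condensed from the trace:)
    build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1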
00:07:25.416 73728.00 IOPS, 288.00 MiB/s 00:07:25.416 Latency(us) 00:07:25.416 [2024-11-10T05:09:18.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.416 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.416 Nvme0n1 : 1.02 12234.17 47.79 0.00 0.00 10441.95 8872.57 19862.45 00:07:25.416 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.416 Nvme1n1 : 1.02 12220.16 47.74 0.00 0.00 10439.24 9023.80 19358.33 00:07:25.416 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.416 Nvme2n1 : 1.02 12206.25 47.68 0.00 0.00 10430.98 9023.80 18753.38 00:07:25.416 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.416 Nvme2n2 : 1.02 12192.38 47.63 0.00 0.00 10419.65 8418.86 18350.08 00:07:25.416 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.416 Nvme2n3 : 1.02 12178.65 47.57 0.00 0.00 10416.32 8267.62 18450.90 00:07:25.416 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.416 Nvme3n1 : 1.03 12164.95 47.52 0.00 0.00 10405.31 6906.49 20064.10 00:07:25.416 [2024-11-10T05:09:18.652Z] =================================================================================================================== 00:07:25.416 [2024-11-10T05:09:18.652Z] Total : 73196.56 285.92 0.00 0.00 10425.58 6906.49 20064.10 00:07:25.673 00:07:25.673 real 0m1.861s 00:07:25.673 user 0m1.551s 00:07:25.673 sys 0m0.200s 00:07:25.673 05:09:18 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.673 ************************************ 00:07:25.673 END TEST bdev_write_zeroes 00:07:25.673 ************************************ 00:07:25.673 05:09:18 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:25.673 05:09:18 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.673 05:09:18 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:25.673 05:09:18 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.673 05:09:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.673 ************************************ 00:07:25.673 START TEST bdev_json_nonenclosed 00:07:25.673 ************************************ 00:07:25.673 05:09:18 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.673 [2024-11-10 05:09:18.846131] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:25.673 [2024-11-10 05:09:18.846243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72917 ] 00:07:25.936 [2024-11-10 05:09:18.993809] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.936 [2024-11-10 05:09:19.028407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.936 [2024-11-10 05:09:19.028492] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:25.936 [2024-11-10 05:09:19.028506] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:25.936 [2024-11-10 05:09:19.028518] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.936 00:07:25.936 real 0m0.326s 00:07:25.936 user 0m0.122s 00:07:25.936 sys 0m0.100s 00:07:25.936 05:09:19 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.936 ************************************ 00:07:25.936 END TEST bdev_json_nonenclosed 00:07:25.936 ************************************ 00:07:25.936 05:09:19 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:25.936 05:09:19 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.936 05:09:19 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:25.936 05:09:19 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.936 05:09:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.936 ************************************ 00:07:25.936 START TEST bdev_json_nonarray 00:07:25.936 ************************************ 00:07:25.936 05:09:19 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.194 [2024-11-10 05:09:19.206016] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:26.194 [2024-11-10 05:09:19.206125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72937 ] 00:07:26.194 [2024-11-10 05:09:19.354897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.194 [2024-11-10 05:09:19.389494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.194 [2024-11-10 05:09:19.389580] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
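(Annotation: the two JSON negative tests feed deliberately malformed configs and expect the app to stop with a non-zero rc. The snippets below are hypothetical reconstructions matching the errors logged, not the actual fixture files:)
    # nonenclosed.json (hypothetical): top-level braces missing ->
    # "Invalid JSON configuration: not enclosed in {}."
    #   "subsystems": []
    #
    # nonarray.json (hypothetical): "subsystems" is not an array ->
    # "Invalid JSON configuration: 'subsystems' should be an array."
    #   { "subsystems": {} }
    #
    # well-formed shape for comparison:
    #   { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }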
00:07:26.194 [2024-11-10 05:09:19.389596] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:26.194 [2024-11-10 05:09:19.389609] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.452 00:07:26.452 real 0m0.324s 00:07:26.452 user 0m0.125s 00:07:26.452 sys 0m0.096s 00:07:26.452 05:09:19 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.452 05:09:19 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:26.452 ************************************ 00:07:26.452 END TEST bdev_json_nonarray 00:07:26.452 ************************************ 00:07:26.452 05:09:19 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:26.452 05:09:19 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:26.452 05:09:19 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:26.452 05:09:19 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:26.452 05:09:19 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:26.452 05:09:19 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:26.452 05:09:19 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:26.452 05:09:19 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:26.452 05:09:19 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:26.452 05:09:19 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:26.452 05:09:19 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:26.452 00:07:26.452 real 0m30.064s 00:07:26.452 user 0m48.130s 00:07:26.452 sys 0m4.798s 00:07:26.452 05:09:19 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.452 05:09:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.452 ************************************ 00:07:26.452 END TEST blockdev_nvme 00:07:26.452 ************************************ 00:07:26.452 05:09:19 -- spdk/autotest.sh@209 -- # uname -s 00:07:26.452 05:09:19 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:26.452 05:09:19 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:26.452 05:09:19 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:26.452 05:09:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.452 05:09:19 -- common/autotest_common.sh@10 -- # set +x 00:07:26.452 ************************************ 00:07:26.452 START TEST blockdev_nvme_gpt 00:07:26.452 ************************************ 00:07:26.452 05:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:26.452 * Looking for test storage... 
00:07:26.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:26.452 05:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:26.452 05:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:07:26.452 05:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:26.452 05:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.452 05:09:19 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:26.452 05:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.452 05:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:26.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.452 --rc genhtml_branch_coverage=1 00:07:26.452 --rc genhtml_function_coverage=1 00:07:26.452 --rc genhtml_legend=1 00:07:26.452 --rc geninfo_all_blocks=1 00:07:26.452 --rc geninfo_unexecuted_blocks=1 00:07:26.452 00:07:26.452 ' 00:07:26.453 05:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:26.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.453 --rc 
genhtml_branch_coverage=1 00:07:26.453 --rc genhtml_function_coverage=1 00:07:26.453 --rc genhtml_legend=1 00:07:26.453 --rc geninfo_all_blocks=1 00:07:26.453 --rc geninfo_unexecuted_blocks=1 00:07:26.453 00:07:26.453 ' 00:07:26.453 05:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:26.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.453 --rc genhtml_branch_coverage=1 00:07:26.453 --rc genhtml_function_coverage=1 00:07:26.453 --rc genhtml_legend=1 00:07:26.453 --rc geninfo_all_blocks=1 00:07:26.453 --rc geninfo_unexecuted_blocks=1 00:07:26.453 00:07:26.453 ' 00:07:26.453 05:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:26.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.453 --rc genhtml_branch_coverage=1 00:07:26.453 --rc genhtml_function_coverage=1 00:07:26.453 --rc genhtml_legend=1 00:07:26.453 --rc geninfo_all_blocks=1 00:07:26.453 --rc geninfo_unexecuted_blocks=1 00:07:26.453 00:07:26.453 ' 00:07:26.453 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:26.453 05:09:19 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:26.453 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:26.453 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:26.453 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:26.453 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:26.453 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:26.453 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:26.453 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73010 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:26.710 05:09:19 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 73010 00:07:26.710 05:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 73010 ']' 00:07:26.710 05:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.710 05:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.710 05:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.710 05:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.710 05:09:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.710 [2024-11-10 05:09:19.764083] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:26.710 [2024-11-10 05:09:19.764192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73010 ] 00:07:26.710 [2024-11-10 05:09:19.912084] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.967 [2024-11-10 05:09:19.946549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.532 05:09:20 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.532 05:09:20 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:27.532 05:09:20 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:27.532 05:09:20 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:27.532 05:09:20 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:27.789 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:27.789 Waiting for block devices as requested 00:07:28.046 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.046 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.046 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.046 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:33.310 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:33.310 05:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 
00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:33.310 05:09:26 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:33.310 05:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:33.310 05:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:33.310 05:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:33.310 05:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
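(Annotation: the loop traced above is the zoned-device scan, and the lines that follow probe each candidate with parted until one has no recognised disk label, then lay down the test partitions. A condensed sketch; the commands and GUIDs are taken from the trace below, the GUIDs being the SPDK GPT partition-type GUIDs extracted from module/bdev/gpt/gpt.h:)
    # zoned scan: a namespace counts as zoned when queue/zoned exists and reads other than "none"
    for dev in /sys/block/nvme*; do
        [[ -e $dev/queue/zoned && $(<$dev/queue/zoned) != none ]] && echo "excluding ${dev##*/}"
    done
    # GPT layout on the first unlabelled device (here /dev/nvme0n1):
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1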
00:07:33.310 05:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:33.310 05:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:33.310 05:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:33.311 05:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:33.311 BYT; 00:07:33.311 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:33.311 05:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:33.311 BYT; 00:07:33.311 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:33.311 05:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:33.311 05:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:33.311 05:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:33.311 05:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:33.311 05:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:33.311 05:09:26 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:35.839 05:09:28 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:35.839 05:09:28 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:35.839 05:09:28 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:35.839 05:09:28 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:35.839 05:09:28 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:35.839 05:09:28 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:37.766 The operation has completed successfully. 00:07:37.766 05:09:30 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:38.710 The operation has completed successfully. 00:07:38.710 05:09:31 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:38.970 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.541 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.541 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.541 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.541 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.541 05:09:32 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:39.541 05:09:32 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.541 05:09:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.541 [] 00:07:39.541 05:09:32 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.541 05:09:32 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:39.541 05:09:32 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:39.541 05:09:32 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:39.541 05:09:32 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:39.802 05:09:32 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:39.802 05:09:32 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.802 05:09:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.064 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.064 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:40.064 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.064 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.064 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.064 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:40.064 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:40.064 05:09:33 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.064 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.064 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.064 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:40.064 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.064 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.064 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.064 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:40.064 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.064 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.064 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.064 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:40.064 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:40.064 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:40.064 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.064 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.064 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.064 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:40.065 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0744cd50-7491-42ee-b9b5-4704c8c957f0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0744cd50-7491-42ee-b9b5-4704c8c957f0",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c61f1140-b6ad-4fb8-a6a9-613aef38c05d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c61f1140-b6ad-4fb8-a6a9-613aef38c05d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' 
' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "89e0e418-e451-4a33-94a6-1b277ecea4e7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "89e0e418-e451-4a33-94a6-1b277ecea4e7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a9539e1f-89ce-49cc-b6d2-04d14b121622"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a9539e1f-89ce-49cc-b6d2-04d14b121622",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "37ca9f0e-e058-4d3a-bac8-f13374e7c7dc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "37ca9f0e-e058-4d3a-bac8-f13374e7c7dc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:40.065 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:40.065 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:40.065 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:40.065 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:40.065 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 73010 00:07:40.065 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 73010 ']' 00:07:40.065 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 73010 00:07:40.065 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:40.065 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.065 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73010 00:07:40.065 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.065 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.065 killing process with pid 73010 00:07:40.065 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73010' 00:07:40.065 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 73010 00:07:40.065 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 73010 00:07:40.326 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:40.326 05:09:33 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:40.326 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:40.326 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.326 05:09:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.326 ************************************ 00:07:40.326 START TEST bdev_hello_world 00:07:40.326 ************************************ 00:07:40.326 05:09:33 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:40.326 [2024-11-10 
05:09:33.548541] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:40.326 [2024-11-10 05:09:33.548654] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73662 ] 00:07:40.587 [2024-11-10 05:09:33.695354] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.587 [2024-11-10 05:09:33.726895] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.159 [2024-11-10 05:09:34.093452] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:41.159 [2024-11-10 05:09:34.093499] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:41.159 [2024-11-10 05:09:34.093516] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:41.159 [2024-11-10 05:09:34.095565] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:41.159 [2024-11-10 05:09:34.096065] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:41.159 [2024-11-10 05:09:34.096094] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:41.159 [2024-11-10 05:09:34.096321] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:41.159 00:07:41.159 [2024-11-10 05:09:34.096352] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:41.159 00:07:41.159 real 0m0.754s 00:07:41.159 user 0m0.499s 00:07:41.159 sys 0m0.152s 00:07:41.159 05:09:34 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.159 05:09:34 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:41.159 ************************************ 00:07:41.159 END TEST bdev_hello_world 00:07:41.159 ************************************ 00:07:41.159 05:09:34 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:41.159 05:09:34 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:41.159 05:09:34 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.159 05:09:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.159 ************************************ 00:07:41.159 START TEST bdev_bounds 00:07:41.159 ************************************ 00:07:41.159 05:09:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:41.159 Process bdevio pid: 73692 00:07:41.159 05:09:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73692 00:07:41.159 05:09:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:41.159 05:09:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73692' 00:07:41.159 05:09:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73692 00:07:41.159 05:09:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73692 ']' 00:07:41.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
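(Annotation: before the bdevio run that follows, note the hello-world pass just completed drives SPDK's hello_bdev example; condensed from the trace, it opens the named bdev, writes "Hello World!", reads it back, and exits:)
    build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1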
00:07:41.159 05:09:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.159 05:09:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.159 05:09:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.159 05:09:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.159 05:09:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:41.159 05:09:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:41.159 [2024-11-10 05:09:34.330616] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:41.159 [2024-11-10 05:09:34.330719] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73692 ] 00:07:41.418 [2024-11-10 05:09:34.466793] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.418 [2024-11-10 05:09:34.499698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.418 [2024-11-10 05:09:34.500296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.418 [2024-11-10 05:09:34.500377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.359 05:09:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.359 05:09:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:42.359 05:09:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:42.359 I/O targets: 00:07:42.359 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:42.359 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:42.359 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:42.359 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.359 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.359 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.359 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:42.359 00:07:42.359 00:07:42.359 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.359 http://cunit.sourceforge.net/ 00:07:42.359 00:07:42.359 00:07:42.359 Suite: bdevio tests on: Nvme3n1 00:07:42.359 Test: blockdev write read block ...passed 00:07:42.359 Test: blockdev write zeroes read block ...passed 00:07:42.359 Test: blockdev write zeroes read no split ...passed 00:07:42.359 Test: blockdev write zeroes read split ...passed 00:07:42.359 Test: blockdev write zeroes read split partial ...passed 00:07:42.359 Test: blockdev reset ...[2024-11-10 05:09:35.369107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:42.359 [2024-11-10 05:09:35.370921] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
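The suites that follow lean on comparev_and_writev, and whether a bdev accepts COMPARE at all is advertised in the bdev_get_bdevs dump printed at the top of this section ("compare": true under the supported I/O types). A sketch against that same JSON shape, assuming a running app listening on /var/tmp/spdk.sock:

# list each bdev and whether it advertises compare support
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
    | jq -r '.[] | "\(.name)\tcompare=\(.supported_io_types.compare)"'

Note the COMPARE FAILURE (02/85) notices in these suites appear to be the deliberately provoked miscompare path, not failures: every comparev test still reports passed.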
00:07:42.359 passed 00:07:42.359 Test: blockdev write read 8 blocks ...passed 00:07:42.359 Test: blockdev write read size > 128k ...passed 00:07:42.359 Test: blockdev write read invalid size ...passed 00:07:42.359 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.359 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.359 Test: blockdev write read max offset ...passed 00:07:42.359 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.359 Test: blockdev writev readv 8 blocks ...passed 00:07:42.359 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.359 Test: blockdev writev readv block ...passed 00:07:42.359 Test: blockdev writev readv size > 128k ...passed 00:07:42.359 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.359 Test: blockdev comparev and writev ...[2024-11-10 05:09:35.376049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf20e000 len:0x1000 00:07:42.359 [2024-11-10 05:09:35.376165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.359 passed 00:07:42.359 Test: blockdev nvme passthru rw ...passed 00:07:42.359 Test: blockdev nvme passthru vendor specific ...[2024-11-10 05:09:35.376865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.359 [2024-11-10 05:09:35.376945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:07:42.359 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:07:42.360 passed 00:07:42.360 Test: blockdev copy ...passed 00:07:42.360 Suite: bdevio tests on: Nvme2n3 00:07:42.360 Test: blockdev write read block ...passed 00:07:42.360 Test: blockdev write zeroes read block ...passed 00:07:42.360 Test: blockdev write zeroes read no split ...passed 00:07:42.360 Test: blockdev write zeroes read split ...passed 00:07:42.360 Test: blockdev write zeroes read split partial ...passed 00:07:42.360 Test: blockdev reset ...[2024-11-10 05:09:35.394211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:42.360 [2024-11-10 05:09:35.395960] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
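The offset + nbytes boundary tests above read exactly up to the end of the bdev (which must succeed) and one block past it (which the bdev layer must reject cleanly). The same two cases sketched with dd against an nbd export; N and /dev/nbdX are illustrative, and through dd the past-end case simply returns no data:

N=1048576                                                                  # bdev size in 4096-byte blocks
dd if=/dev/nbdX bs=4096 skip=$((N - 1)) count=1 iflag=direct >/dev/null    # last block: 1 record in
dd if=/dev/nbdX bs=4096 skip=$N count=1 iflag=direct >/dev/null            # at EOF: 0 records copied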
00:07:42.360 passed 00:07:42.360 Test: blockdev write read 8 blocks ...passed 00:07:42.360 Test: blockdev write read size > 128k ...passed 00:07:42.360 Test: blockdev write read invalid size ...passed 00:07:42.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.360 Test: blockdev write read max offset ...passed 00:07:42.360 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.360 Test: blockdev writev readv 8 blocks ...passed 00:07:42.360 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.360 Test: blockdev writev readv block ...passed 00:07:42.360 Test: blockdev writev readv size > 128k ...passed 00:07:42.360 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.360 Test: blockdev comparev and writev ...[2024-11-10 05:09:35.401204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf20a000 len:0x1000 00:07:42.360 [2024-11-10 05:09:35.401309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.360 passed 00:07:42.360 Test: blockdev nvme passthru rw ...passed 00:07:42.360 Test: blockdev nvme passthru vendor specific ...[2024-11-10 05:09:35.402102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.360 [2024-11-10 05:09:35.402182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:07:42.360 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:07:42.360 passed 00:07:42.360 Test: blockdev copy ...passed 00:07:42.360 Suite: bdevio tests on: Nvme2n2 00:07:42.360 Test: blockdev write read block ...passed 00:07:42.360 Test: blockdev write zeroes read block ...passed 00:07:42.360 Test: blockdev write zeroes read no split ...passed 00:07:42.360 Test: blockdev write zeroes read split ...passed 00:07:42.360 Test: blockdev write zeroes read split partial ...passed 00:07:42.360 Test: blockdev reset ...[2024-11-10 05:09:35.416726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:42.360 [2024-11-10 05:09:35.418518] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
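Each suite opens with a blockdev reset, traced here as nvme_ctrlr_disconnect on the controller's PCIe address followed by a successful reconnect. The same cycle can be driven by RPC against a running app; a sketch assuming the bdev_nvme_reset_controller method is available in this SPDK tree:

# disconnect/reconnect the named NVMe controller (assumed RPC)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_reset_controller Nvme2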
00:07:42.360 passed 00:07:42.360 Test: blockdev write read 8 blocks ...passed 00:07:42.360 Test: blockdev write read size > 128k ...passed 00:07:42.360 Test: blockdev write read invalid size ...passed 00:07:42.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.360 Test: blockdev write read max offset ...passed 00:07:42.360 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.360 Test: blockdev writev readv 8 blocks ...passed 00:07:42.360 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.360 Test: blockdev writev readv block ...passed 00:07:42.360 Test: blockdev writev readv size > 128k ...passed 00:07:42.360 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.360 Test: blockdev comparev and writev ...[2024-11-10 05:09:35.424079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3205000 len:0x1000 00:07:42.360 [2024-11-10 05:09:35.424117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.360 passed 00:07:42.360 Test: blockdev nvme passthru rw ...passed 00:07:42.360 Test: blockdev nvme passthru vendor specific ...[2024-11-10 05:09:35.424769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.360 [2024-11-10 05:09:35.424791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.360 passed 00:07:42.360 Test: blockdev nvme admin passthru ...passed 00:07:42.360 Test: blockdev copy ...passed 00:07:42.360 Suite: bdevio tests on: Nvme2n1 00:07:42.360 Test: blockdev write read block ...passed 00:07:42.360 Test: blockdev write zeroes read block ...passed 00:07:42.360 Test: blockdev write zeroes read no split ...passed 00:07:42.360 Test: blockdev write zeroes read split ...passed 00:07:42.360 Test: blockdev write zeroes read split partial ...passed 00:07:42.360 Test: blockdev reset ...[2024-11-10 05:09:35.436324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:42.360 [2024-11-10 05:09:35.438158] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:42.360 passed 00:07:42.360 Test: blockdev write read 8 blocks ...passed 00:07:42.360 Test: blockdev write read size > 128k ...passed 00:07:42.360 Test: blockdev write read invalid size ...passed 00:07:42.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.360 Test: blockdev write read max offset ...passed 00:07:42.360 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.360 Test: blockdev writev readv 8 blocks ...passed 00:07:42.360 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.360 Test: blockdev writev readv block ...passed 00:07:42.360 Test: blockdev writev readv size > 128k ...passed 00:07:42.360 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.360 Test: blockdev comparev and writev ...[2024-11-10 05:09:35.445172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bee02000 len:0x1000 00:07:42.360 [2024-11-10 05:09:35.445291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.360 passed 00:07:42.360 Test: blockdev nvme passthru rw ...passed 00:07:42.360 Test: blockdev nvme passthru vendor specific ...[2024-11-10 05:09:35.446102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.360 [2024-11-10 05:09:35.446185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.360 passed 00:07:42.360 Test: blockdev nvme admin passthru ...passed 00:07:42.360 Test: blockdev copy ...passed 00:07:42.360 Suite: bdevio tests on: Nvme1n1p2 00:07:42.360 Test: blockdev write read block ...passed 00:07:42.360 Test: blockdev write zeroes read block ...passed 00:07:42.360 Test: blockdev write zeroes read no split ...passed 00:07:42.360 Test: blockdev write zeroes read split ...passed 00:07:42.360 Test: blockdev write zeroes read split partial ...passed 00:07:42.360 Test: blockdev reset ...[2024-11-10 05:09:35.459560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:42.360 [2024-11-10 05:09:35.460964] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:42.360 passed 00:07:42.360 Test: blockdev write read 8 blocks ...passed 00:07:42.360 Test: blockdev write read size > 128k ...passed 00:07:42.360 Test: blockdev write read invalid size ...passed 00:07:42.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.360 Test: blockdev write read max offset ...passed 00:07:42.360 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.360 Test: blockdev writev readv 8 blocks ...passed 00:07:42.360 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.360 Test: blockdev writev readv block ...passed 00:07:42.360 Test: blockdev writev readv size > 128k ...passed 00:07:42.360 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.360 Test: blockdev comparev and writev ...[2024-11-10 05:09:35.465241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d643b000 len:0x1000 00:07:42.360 [2024-11-10 05:09:35.465278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.360 passed 00:07:42.360 Test: blockdev nvme passthru rw ...passed 00:07:42.360 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.360 Test: blockdev nvme admin passthru ...passed 00:07:42.360 Test: blockdev copy ...passed 00:07:42.360 Suite: bdevio tests on: Nvme1n1p1 00:07:42.360 Test: blockdev write read block ...passed 00:07:42.360 Test: blockdev write zeroes read block ...passed 00:07:42.360 Test: blockdev write zeroes read no split ...passed 00:07:42.360 Test: blockdev write zeroes read split ...passed 00:07:42.360 Test: blockdev write zeroes read split partial ...passed 00:07:42.360 Test: blockdev reset ...[2024-11-10 05:09:35.475755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:42.360 [2024-11-10 05:09:35.477166] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
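The Nvme1n1p2 comparev above lands at absolute lba:655360 even though the test addressed block 0 of the partition bdev: the GPT layer adds the partition's start LBA. That value is consistent with the I/O targets listed earlier, since 256 (the Nvme1n1p1 start, visible as lba:256 in the next suite) + 655104 (Nvme1n1p1 blocks) = 655360. A sketch of checking the translation with dd, assuming both the raw Nvme1n1 namespace and Nvme1n1p2 were exported as nbd nodes — they are not in this run, so the paths are purely illustrative:

dd if=/dev/nbd8 bs=4096 skip=655360 count=1 iflag=direct of=/tmp/parent_blk   # parent namespace at p2's start LBA
dd if=/dev/nbd9 bs=4096 skip=0 count=1 iflag=direct of=/tmp/p2_blk            # partition bdev block 0
cmp /tmp/parent_blk /tmp/p2_blk                                               # equal iff the offset translation holds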
00:07:42.360 passed 00:07:42.360 Test: blockdev write read 8 blocks ...passed 00:07:42.360 Test: blockdev write read size > 128k ...passed 00:07:42.360 Test: blockdev write read invalid size ...passed 00:07:42.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.360 Test: blockdev write read max offset ...passed 00:07:42.360 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.360 Test: blockdev writev readv 8 blocks ...passed 00:07:42.361 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.361 Test: blockdev writev readv block ...passed 00:07:42.361 Test: blockdev writev readv size > 128k ...passed 00:07:42.361 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.361 Test: blockdev comparev and writev ...[2024-11-10 05:09:35.480862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2d6437000 len:0x1000 00:07:42.361 [2024-11-10 05:09:35.480898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.361 passed 00:07:42.361 Test: blockdev nvme passthru rw ...passed 00:07:42.361 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.361 Test: blockdev nvme admin passthru ...passed 00:07:42.361 Test: blockdev copy ...passed 00:07:42.361 Suite: bdevio tests on: Nvme0n1 00:07:42.361 Test: blockdev write read block ...passed 00:07:42.361 Test: blockdev write zeroes read block ...passed 00:07:42.361 Test: blockdev write zeroes read no split ...passed 00:07:42.361 Test: blockdev write zeroes read split ...passed 00:07:42.361 Test: blockdev write zeroes read split partial ...passed 00:07:42.361 Test: blockdev reset ...[2024-11-10 05:09:35.490611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:42.361 [2024-11-10 05:09:35.492028] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:42.361 passed 00:07:42.361 Test: blockdev write read 8 blocks ...passed 00:07:42.361 Test: blockdev write read size > 128k ...passed 00:07:42.361 Test: blockdev write read invalid size ...passed 00:07:42.361 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.361 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.361 Test: blockdev write read max offset ...passed 00:07:42.361 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.361 Test: blockdev writev readv 8 blocks ...passed 00:07:42.361 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.361 Test: blockdev writev readv block ...passed 00:07:42.361 Test: blockdev writev readv size > 128k ...passed 00:07:42.361 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.361 Test: blockdev comparev and writev ...passed 00:07:42.361 Test: blockdev nvme passthru rw ...[2024-11-10 05:09:35.495075] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:42.361 separate metadata which is not supported yet. 
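Nvme0n1 is the one bdev whose comparev is skipped: its namespace carries separate metadata, which bdevio's comparev_and_writev does not handle yet, so the suite logs the skip and still counts the test as passed. The CUnit footer that follows is the machine-checkable result; a sketch of gating on it, assuming raw bdevio stdout without the autotest timestamp prefixes captured in "$log":

# column 5 of the "tests" summary row is Failed; exit nonzero on any failure
awk '$1 == "tests" { exit ($5 != 0) }' "$log"

A stricter gate would also verify the summary row exists, since an empty log would exit 0 here.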
00:07:42.361 passed 00:07:42.361 Test: blockdev nvme passthru vendor specific ...[2024-11-10 05:09:35.495378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:42.361 [2024-11-10 05:09:35.495412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:42.361 passed 00:07:42.361 Test: blockdev nvme admin passthru ...passed 00:07:42.361 Test: blockdev copy ...passed 00:07:42.361 00:07:42.361 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.361 suites 7 7 n/a 0 0 00:07:42.361 tests 161 161 161 0 0 00:07:42.361 asserts 1025 1025 1025 0 n/a 00:07:42.361 00:07:42.361 Elapsed time = 0.342 seconds 00:07:42.361 0 00:07:42.361 05:09:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73692 00:07:42.361 05:09:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73692 ']' 00:07:42.361 05:09:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73692 00:07:42.361 05:09:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:42.361 05:09:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.361 05:09:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73692 00:07:42.361 05:09:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.361 05:09:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.361 killing process with pid 73692 00:07:42.361 05:09:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73692' 00:07:42.361 05:09:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73692 00:07:42.361 05:09:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73692 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:42.621 00:07:42.621 real 0m1.411s 00:07:42.621 user 0m3.724s 00:07:42.621 sys 0m0.261s 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:42.621 ************************************ 00:07:42.621 END TEST bdev_bounds 00:07:42.621 ************************************ 00:07:42.621 05:09:35 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:42.621 05:09:35 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:42.621 05:09:35 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.621 05:09:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.621 ************************************ 00:07:42.621 START TEST bdev_nbd 00:07:42.621 ************************************ 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73736 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73736 /var/tmp/spdk-nbd.sock 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73736 ']' 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:42.621 05:09:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:42.622 05:09:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:42.622 05:09:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.622 05:09:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:42.622 05:09:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:42.622 [2024-11-10 05:09:35.798502] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
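nbd_function_test, starting here, exports each bdev as a kernel /dev/nbdN node through the spdk-nbd.sock RPC server, waits for the kernel to publish the device, and verifies it with a one-block direct-I/O read. A condensed bash sketch of the per-device cycle the trace below performs; the 0.1 s retry delay is an assumption, the rest mirrors the traced commands:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
[ -e /sys/module/nbd ] || exit 0                               # kernel nbd module is a precondition
"$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0             # export the bdev as an nbd node
for i in $(seq 1 20); do                                       # waitfornbd: bounded poll of /proc/partitions
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1                                                  # assumed delay; the trace shows only the bounded loop
done
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one-block direct read through the export
[ "$(stat -c %s /tmp/nbdtest)" = 4096 ]                        # a full 4096-byte block must have arrived
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0                      # tear the export back down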
00:07:42.622 [2024-11-10 05:09:35.798619] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.881 [2024-11-10 05:09:35.947400] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.881 [2024-11-10 05:09:35.978551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.452 05:09:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.452 05:09:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:43.452 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:43.452 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.452 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.452 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:43.452 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:43.452 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.452 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.452 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:43.452 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:43.452 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:43.452 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:43.452 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:43.452 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.712 1+0 records in 00:07:43.712 1+0 records out 00:07:43.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361415 s, 11.3 MB/s 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:43.712 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:43.713 05:09:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:43.972 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.973 1+0 records in 00:07:43.973 1+0 records out 00:07:43.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405632 s, 10.1 MB/s 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:43.973 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.233 1+0 records in 00:07:44.233 1+0 records out 00:07:44.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365801 s, 11.2 MB/s 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.233 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.493 1+0 records in 00:07:44.493 1+0 records out 00:07:44.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389317 s, 10.5 MB/s 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.493 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.753 1+0 records in 00:07:44.753 1+0 records out 00:07:44.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321907 s, 12.7 MB/s 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:44.753 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.753 1+0 records in 00:07:44.753 1+0 records out 00:07:44.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516735 s, 7.9 MB/s 00:07:45.013 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.013 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:45.013 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.013 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.013 05:09:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:45.013 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.013 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.013 05:09:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.013 1+0 records in 00:07:45.013 1+0 records out 00:07:45.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458904 s, 8.9 MB/s 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.013 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.272 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:45.272 { 00:07:45.272 "nbd_device": "/dev/nbd0", 00:07:45.272 "bdev_name": "Nvme0n1" 00:07:45.272 }, 00:07:45.272 { 00:07:45.272 "nbd_device": "/dev/nbd1", 00:07:45.272 "bdev_name": "Nvme1n1p1" 00:07:45.272 }, 00:07:45.272 { 00:07:45.272 "nbd_device": "/dev/nbd2", 00:07:45.272 "bdev_name": "Nvme1n1p2" 00:07:45.272 }, 00:07:45.272 { 00:07:45.272 "nbd_device": "/dev/nbd3", 00:07:45.272 "bdev_name": "Nvme2n1" 00:07:45.272 }, 00:07:45.272 { 00:07:45.272 "nbd_device": "/dev/nbd4", 00:07:45.272 "bdev_name": "Nvme2n2" 00:07:45.272 }, 00:07:45.272 { 00:07:45.272 "nbd_device": "/dev/nbd5", 00:07:45.272 "bdev_name": "Nvme2n3" 00:07:45.272 }, 00:07:45.272 { 00:07:45.272 "nbd_device": "/dev/nbd6", 00:07:45.272 "bdev_name": "Nvme3n1" 00:07:45.272 } 00:07:45.272 ]' 00:07:45.272 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:45.272 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:45.272 { 00:07:45.272 "nbd_device": "/dev/nbd0", 00:07:45.272 "bdev_name": "Nvme0n1" 00:07:45.272 }, 00:07:45.272 { 00:07:45.272 "nbd_device": "/dev/nbd1", 00:07:45.272 "bdev_name": "Nvme1n1p1" 00:07:45.272 }, 00:07:45.272 { 00:07:45.272 "nbd_device": "/dev/nbd2", 00:07:45.272 "bdev_name": "Nvme1n1p2" 00:07:45.272 }, 00:07:45.272 { 00:07:45.272 "nbd_device": "/dev/nbd3", 00:07:45.272 "bdev_name": "Nvme2n1" 00:07:45.272 }, 00:07:45.272 { 00:07:45.272 "nbd_device": "/dev/nbd4", 00:07:45.272 "bdev_name": "Nvme2n2" 00:07:45.272 }, 00:07:45.272 { 00:07:45.273 "nbd_device": "/dev/nbd5", 00:07:45.273 "bdev_name": "Nvme2n3" 00:07:45.273 }, 00:07:45.273 { 00:07:45.273 "nbd_device": "/dev/nbd6", 00:07:45.273 "bdev_name": "Nvme3n1" 00:07:45.273 } 00:07:45.273 ]' 00:07:45.273 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:45.273 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:45.273 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.273 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:45.273 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:45.273 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:45.273 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.273 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:45.532 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:45.532 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:45.532 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:45.532 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.532 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.532 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:45.532 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.532 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.532 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.533 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:45.792 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:45.792 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:45.792 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:45.792 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.792 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.792 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:45.792 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.792 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.792 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.792 05:09:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.053 05:09:39 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.053 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:46.314 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:46.314 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:46.314 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:46.314 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.314 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.314 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:46.314 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.314 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.314 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.314 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:46.574 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:46.574 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:46.574 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:46.574 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.574 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.574 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:46.574 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.574 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.574 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.574 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:46.833 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:46.833 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:46.833 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:46.833 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.833 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.833 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:46.833 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.833 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.833 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:46.833 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.833 05:09:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:47.092 
05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:47.092 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:47.353 /dev/nbd0 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.353 1+0 records in 00:07:47.353 1+0 records out 00:07:47.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508226 s, 8.1 MB/s 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:47.353 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:47.353 /dev/nbd1 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.615 05:09:40 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.615 1+0 records in 00:07:47.615 1+0 records out 00:07:47.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336496 s, 12.2 MB/s 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:47.615 /dev/nbd10 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.615 1+0 records in 00:07:47.615 1+0 records out 00:07:47.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331393 s, 12.4 MB/s 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:47.615 05:09:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:47.877 /dev/nbd11 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.877 1+0 records in 00:07:47.877 1+0 records out 00:07:47.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467482 s, 8.8 MB/s 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:47.877 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:48.139 /dev/nbd12 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.139 1+0 records in 00:07:48.139 1+0 records out 00:07:48.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041087 s, 10.0 MB/s 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.139 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:48.399 /dev/nbd13 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.399 1+0 records in 00:07:48.399 1+0 records out 00:07:48.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332504 s, 12.3 MB/s 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.399 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:48.660 /dev/nbd14 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.660 1+0 records in 00:07:48.660 1+0 records out 00:07:48.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531768 s, 7.7 MB/s 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.660 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:48.921 { 00:07:48.921 "nbd_device": "/dev/nbd0", 00:07:48.921 "bdev_name": "Nvme0n1" 00:07:48.921 }, 00:07:48.921 { 00:07:48.921 "nbd_device": "/dev/nbd1", 00:07:48.921 "bdev_name": "Nvme1n1p1" 00:07:48.921 }, 00:07:48.921 { 00:07:48.921 "nbd_device": "/dev/nbd10", 00:07:48.921 "bdev_name": "Nvme1n1p2" 00:07:48.921 }, 00:07:48.921 { 00:07:48.921 "nbd_device": "/dev/nbd11", 00:07:48.921 "bdev_name": "Nvme2n1" 00:07:48.921 }, 00:07:48.921 { 00:07:48.921 "nbd_device": "/dev/nbd12", 00:07:48.921 "bdev_name": "Nvme2n2" 00:07:48.921 }, 00:07:48.921 { 00:07:48.921 "nbd_device": "/dev/nbd13", 00:07:48.921 "bdev_name": "Nvme2n3" 
00:07:48.921 }, 00:07:48.921 { 00:07:48.921 "nbd_device": "/dev/nbd14", 00:07:48.921 "bdev_name": "Nvme3n1" 00:07:48.921 } 00:07:48.921 ]' 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:48.921 { 00:07:48.921 "nbd_device": "/dev/nbd0", 00:07:48.921 "bdev_name": "Nvme0n1" 00:07:48.921 }, 00:07:48.921 { 00:07:48.921 "nbd_device": "/dev/nbd1", 00:07:48.921 "bdev_name": "Nvme1n1p1" 00:07:48.921 }, 00:07:48.921 { 00:07:48.921 "nbd_device": "/dev/nbd10", 00:07:48.921 "bdev_name": "Nvme1n1p2" 00:07:48.921 }, 00:07:48.921 { 00:07:48.921 "nbd_device": "/dev/nbd11", 00:07:48.921 "bdev_name": "Nvme2n1" 00:07:48.921 }, 00:07:48.921 { 00:07:48.921 "nbd_device": "/dev/nbd12", 00:07:48.921 "bdev_name": "Nvme2n2" 00:07:48.921 }, 00:07:48.921 { 00:07:48.921 "nbd_device": "/dev/nbd13", 00:07:48.921 "bdev_name": "Nvme2n3" 00:07:48.921 }, 00:07:48.921 { 00:07:48.921 "nbd_device": "/dev/nbd14", 00:07:48.921 "bdev_name": "Nvme3n1" 00:07:48.921 } 00:07:48.921 ]' 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:48.921 /dev/nbd1 00:07:48.921 /dev/nbd10 00:07:48.921 /dev/nbd11 00:07:48.921 /dev/nbd12 00:07:48.921 /dev/nbd13 00:07:48.921 /dev/nbd14' 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:48.921 /dev/nbd1 00:07:48.921 /dev/nbd10 00:07:48.921 /dev/nbd11 00:07:48.921 /dev/nbd12 00:07:48.921 /dev/nbd13 00:07:48.921 /dev/nbd14' 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:48.921 05:09:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:48.921 256+0 records in 00:07:48.921 256+0 records out 00:07:48.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00685681 s, 153 MB/s 00:07:48.921 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.921 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:48.921 256+0 records in 00:07:48.921 256+0 records out 00:07:48.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0755899 s, 13.9 MB/s 00:07:48.921 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.921 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:49.182 256+0 records in 00:07:49.182 256+0 records out 00:07:49.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0745903 s, 14.1 MB/s 00:07:49.182 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.182 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:49.182 256+0 records in 00:07:49.182 256+0 records out 00:07:49.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0739897 s, 14.2 MB/s 00:07:49.182 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.182 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:49.182 256+0 records in 00:07:49.182 256+0 records out 00:07:49.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0795852 s, 13.2 MB/s 00:07:49.182 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.182 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:49.182 256+0 records in 00:07:49.182 256+0 records out 00:07:49.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0731617 s, 14.3 MB/s 00:07:49.182 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.182 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:49.443 256+0 records in 00:07:49.443 256+0 records out 00:07:49.443 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0728102 s, 14.4 MB/s 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:49.443 256+0 records in 00:07:49.443 256+0 records out 00:07:49.443 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0762767 s, 13.7 MB/s 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:49.443 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:49.444 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.444 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:49.444 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:49.444 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:49.444 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.444 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:49.705 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:49.705 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:49.705 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:49.705 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.705 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.705 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:49.705 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.705 05:09:42 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:49.705 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.705 05:09:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:49.965 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:49.965 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:49.965 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:49.965 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.965 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.965 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:49.965 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.965 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.965 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.965 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:50.227 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:50.227 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:50.227 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:50.227 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.227 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.228 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:50.228 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.228 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.228 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.228 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:50.228 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:50.228 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:50.228 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:50.228 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.228 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.228 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:50.228 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.228 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.228 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.228 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:50.489 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:50.489 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:50.489 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:50.489 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.489 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.490 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:50.490 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.490 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.490 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.490 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:50.751 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:50.751 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:50.751 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:50.751 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.751 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.751 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:50.751 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.751 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.751 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.751 05:09:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:51.013 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:51.013 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:51.013 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:51.013 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.013 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.013 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:51.013 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.013 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.013 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.013 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.013 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.275 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:51.275 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:51.275 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.275 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:51.275 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:51.275 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.275 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:51.275 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:51.275 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:51.275 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:51.275 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:51.275 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:51.275 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:51.275 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.275 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:51.275 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:51.537 malloc_lvol_verify 00:07:51.537 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:51.537 b71a2b59-fc28-4317-9958-3443260d5f12 00:07:51.537 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:51.796 611b3013-e0b6-4f20-a4a7-04a6a729b04b 00:07:51.796 05:09:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:52.056 /dev/nbd0 00:07:52.056 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:52.056 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:52.056 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:52.056 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:52.056 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:52.056 mke2fs 1.47.0 (5-Feb-2023) 00:07:52.056 Discarding device blocks: 0/4096 done 00:07:52.056 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:52.056 00:07:52.056 Allocating group tables: 0/1 done 00:07:52.056 Writing inode tables: 0/1 done 00:07:52.056 Creating journal (1024 blocks): done 00:07:52.056 Writing superblocks and filesystem accounting information: 0/1 done 00:07:52.056 00:07:52.056 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:52.056 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.056 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:52.056 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:52.056 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:52.056 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:52.056 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73736 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73736 ']' 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73736 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73736 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.317 killing process with pid 73736 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73736' 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73736 00:07:52.317 05:09:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73736 00:07:52.577 05:09:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:52.577 00:07:52.577 real 0m9.821s 00:07:52.577 user 0m14.365s 00:07:52.577 sys 0m3.333s 00:07:52.577 05:09:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.577 05:09:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:52.577 ************************************ 00:07:52.577 END TEST bdev_nbd 00:07:52.577 ************************************ 00:07:52.577 05:09:45 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:52.577 05:09:45 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:52.577 05:09:45 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:52.577 skipping fio tests on NVMe due to multi-ns failures. 00:07:52.577 05:09:45 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:52.577 05:09:45 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:52.577 05:09:45 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:52.577 05:09:45 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:52.577 05:09:45 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.577 05:09:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:52.577 ************************************ 00:07:52.577 START TEST bdev_verify 00:07:52.577 ************************************ 00:07:52.577 05:09:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:52.577 [2024-11-10 05:09:45.651168] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:52.577 [2024-11-10 05:09:45.651274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74136 ] 00:07:52.577 [2024-11-10 05:09:45.799078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:52.838 [2024-11-10 05:09:45.831000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.838 [2024-11-10 05:09:45.831060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.099 Running I/O for 5 seconds... 
00:07:55.429 23872.00 IOPS, 93.25 MiB/s [2024-11-10T05:09:49.608Z] 27296.00 IOPS, 106.62 MiB/s [2024-11-10T05:09:50.546Z] 26538.67 IOPS, 103.67 MiB/s [2024-11-10T05:09:51.487Z] 25776.00 IOPS, 100.69 MiB/s [2024-11-10T05:09:51.487Z] 25152.00 IOPS, 98.25 MiB/s 00:07:58.251 Latency(us) 00:07:58.251 [2024-11-10T05:09:51.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.251 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.251 Verification LBA range: start 0x0 length 0xbd0bd 00:07:58.251 Nvme0n1 : 5.07 1817.79 7.10 0.00 0.00 70198.14 12250.19 87919.06 00:07:58.251 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.251 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:58.251 Nvme0n1 : 5.06 1718.51 6.71 0.00 0.00 74265.24 12502.25 87515.77 00:07:58.251 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.251 Verification LBA range: start 0x0 length 0x4ff80 00:07:58.251 Nvme1n1p1 : 5.07 1816.82 7.10 0.00 0.00 70065.95 14216.27 76626.71 00:07:58.251 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.251 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:58.251 Nvme1n1p1 : 5.07 1717.99 6.71 0.00 0.00 74158.43 14014.62 77433.30 00:07:58.251 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.251 Verification LBA range: start 0x0 length 0x4ff7f 00:07:58.251 Nvme1n1p2 : 5.07 1816.28 7.09 0.00 0.00 69929.79 15829.46 69367.34 00:07:58.251 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.252 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:58.252 Nvme1n1p2 : 5.07 1717.48 6.71 0.00 0.00 74022.82 15123.69 70980.53 00:07:58.252 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.252 Verification LBA range: start 0x0 length 0x80000 00:07:58.252 Nvme2n1 : 5.08 1815.80 7.09 0.00 0.00 69777.53 15728.64 65737.65 00:07:58.252 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.252 Verification LBA range: start 0x80000 length 0x80000 00:07:58.252 Nvme2n1 : 5.07 1717.04 6.71 0.00 0.00 73871.00 16333.59 67754.14 00:07:58.252 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.252 Verification LBA range: start 0x0 length 0x80000 00:07:58.252 Nvme2n2 : 5.08 1815.33 7.09 0.00 0.00 69637.19 15123.69 66544.25 00:07:58.252 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.252 Verification LBA range: start 0x80000 length 0x80000 00:07:58.252 Nvme2n2 : 5.07 1716.45 6.70 0.00 0.00 73722.50 15728.64 70173.93 00:07:58.252 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.252 Verification LBA range: start 0x0 length 0x80000 00:07:58.252 Nvme2n3 : 5.09 1824.25 7.13 0.00 0.00 69201.65 3112.96 67350.84 00:07:58.252 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.252 Verification LBA range: start 0x80000 length 0x80000 00:07:58.252 Nvme2n3 : 5.08 1725.54 6.74 0.00 0.00 73213.04 2394.58 72997.02 00:07:58.252 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:58.252 Verification LBA range: start 0x0 length 0x20000 00:07:58.252 Nvme3n1 : 5.10 1833.36 7.16 0.00 0.00 68769.70 7208.96 69770.63 00:07:58.252 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:58.252 Verification LBA range: start 0x20000 length 0x20000 00:07:58.252 
Nvme3n1 : 5.09 1734.83 6.78 0.00 0.00 72714.05 6755.25 73803.62 00:07:58.252 [2024-11-10T05:09:51.488Z] =================================================================================================================== 00:07:58.252 [2024-11-10T05:09:51.488Z] Total : 24787.47 96.83 0.00 0.00 71621.78 2394.58 87919.06 00:07:59.198 00:07:59.198 real 0m6.498s 00:07:59.198 user 0m12.295s 00:07:59.198 sys 0m0.186s 00:07:59.198 05:09:52 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.198 05:09:52 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:59.198 ************************************ 00:07:59.198 END TEST bdev_verify 00:07:59.198 ************************************ 00:07:59.198 05:09:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:59.198 05:09:52 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:59.198 05:09:52 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.198 05:09:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:59.198 ************************************ 00:07:59.198 START TEST bdev_verify_big_io 00:07:59.198 ************************************ 00:07:59.198 05:09:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:59.198 [2024-11-10 05:09:52.187466] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:59.198 [2024-11-10 05:09:52.187574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74228 ] 00:07:59.198 [2024-11-10 05:09:52.331103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:59.198 [2024-11-10 05:09:52.364064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.198 [2024-11-10 05:09:52.364092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.770 Running I/O for 5 seconds... 
00:08:05.001 612.00 IOPS, 38.25 MiB/s [2024-11-10T05:09:58.237Z] 2610.00 IOPS, 163.12 MiB/s [2024-11-10T05:09:59.179Z] 2252.00 IOPS, 140.75 MiB/s [2024-11-10T05:09:59.179Z] 2659.75 IOPS, 166.23 MiB/s 00:08:05.943 Latency(us) 00:08:05.943 [2024-11-10T05:09:59.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.943 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.943 Verification LBA range: start 0x0 length 0xbd0b 00:08:05.943 Nvme0n1 : 5.75 111.40 6.96 0.00 0.00 1078701.95 13409.67 1387346.71 00:08:05.943 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.943 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:05.943 Nvme0n1 : 5.72 117.05 7.32 0.00 0.00 1037398.76 27021.00 1380893.93 00:08:05.943 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.943 Verification LBA range: start 0x0 length 0x4ff8 00:08:05.943 Nvme1n1p1 : 5.94 112.31 7.02 0.00 0.00 1053989.09 94371.84 1755154.90 00:08:05.943 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.943 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:05.943 Nvme1n1p1 : 5.79 113.43 7.09 0.00 0.00 1040156.76 95985.03 1497043.89 00:08:05.943 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.943 Verification LBA range: start 0x0 length 0x4ff7 00:08:05.943 Nvme1n1p2 : 5.94 110.78 6.92 0.00 0.00 1020920.33 94371.84 1780966.01 00:08:05.943 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.943 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:05.943 Nvme1n1p2 : 5.90 100.42 6.28 0.00 0.00 1138444.11 149220.43 1729343.80 00:08:05.943 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.943 Verification LBA range: start 0x0 length 0x8000 00:08:05.943 Nvme2n1 : 5.99 115.81 7.24 0.00 0.00 951363.64 87515.77 1793871.56 00:08:05.943 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.943 Verification LBA range: start 0x8000 length 0x8000 00:08:05.943 Nvme2n1 : 5.90 130.23 8.14 0.00 0.00 857145.24 102034.51 1064707.94 00:08:05.943 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.943 Verification LBA range: start 0x0 length 0x8000 00:08:05.943 Nvme2n2 : 6.08 123.69 7.73 0.00 0.00 863349.11 53235.40 1819682.66 00:08:05.943 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.943 Verification LBA range: start 0x8000 length 0x8000 00:08:05.943 Nvme2n2 : 6.03 138.00 8.62 0.00 0.00 782960.82 59688.17 1077613.49 00:08:05.943 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.943 Verification LBA range: start 0x0 length 0x8000 00:08:05.943 Nvme2n3 : 6.26 135.54 8.47 0.00 0.00 762217.21 61704.66 1845493.76 00:08:05.943 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.943 Verification LBA range: start 0x8000 length 0x8000 00:08:05.943 Nvme2n3 : 6.17 149.33 9.33 0.00 0.00 702876.64 29844.09 1103424.59 00:08:05.943 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.943 Verification LBA range: start 0x0 length 0x2000 00:08:05.944 Nvme3n1 : 6.28 155.04 9.69 0.00 0.00 647860.67 488.37 1871304.86 00:08:05.944 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.944 Verification LBA range: start 0x2000 length 0x2000 00:08:05.944 Nvme3n1 : 6.26 169.35 10.58 0.00 
0.00 601325.76 2457.60 1129235.69 00:08:05.944 [2024-11-10T05:09:59.180Z] =================================================================================================================== 00:08:05.944 [2024-11-10T05:09:59.180Z] Total : 1782.39 111.40 0.00 0.00 866401.19 488.37 1871304.86 00:08:06.575 00:08:06.575 real 0m7.541s 00:08:06.575 user 0m14.381s 00:08:06.575 sys 0m0.218s 00:08:06.575 05:09:59 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.575 05:09:59 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:06.575 ************************************ 00:08:06.575 END TEST bdev_verify_big_io 00:08:06.575 ************************************ 00:08:06.575 05:09:59 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:06.575 05:09:59 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:06.575 05:09:59 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.575 05:09:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:06.575 ************************************ 00:08:06.575 START TEST bdev_write_zeroes 00:08:06.575 ************************************ 00:08:06.575 05:09:59 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:06.575 [2024-11-10 05:09:59.795364] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:06.575 [2024-11-10 05:09:59.795476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74332 ] 00:08:06.838 [2024-11-10 05:09:59.941497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.838 [2024-11-10 05:09:59.972482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.410 Running I/O for 1 seconds... 
00:08:08.350 62720.00 IOPS, 245.00 MiB/s 00:08:08.350 Latency(us) 00:08:08.350 [2024-11-10T05:10:01.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.350 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.350 Nvme0n1 : 1.03 8863.91 34.62 0.00 0.00 14407.52 7309.78 30247.38 00:08:08.350 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.350 Nvme1n1p1 : 1.03 8855.68 34.59 0.00 0.00 14410.62 9830.40 31053.98 00:08:08.350 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.350 Nvme1n1p2 : 1.03 8847.38 34.56 0.00 0.00 14398.34 9578.34 30247.38 00:08:08.350 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.350 Nvme2n1 : 1.04 8839.82 34.53 0.00 0.00 14382.72 9175.04 29239.14 00:08:08.350 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.350 Nvme2n2 : 1.04 8832.29 34.50 0.00 0.00 14361.45 8519.68 28634.19 00:08:08.350 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.350 Nvme2n3 : 1.04 8824.76 34.47 0.00 0.00 14358.31 8771.74 29440.79 00:08:08.350 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.350 Nvme3n1 : 1.04 8817.19 34.44 0.00 0.00 14350.25 8771.74 30650.68 00:08:08.350 [2024-11-10T05:10:01.586Z] =================================================================================================================== 00:08:08.350 [2024-11-10T05:10:01.586Z] Total : 61881.04 241.72 0.00 0.00 14381.32 7309.78 31053.98 00:08:08.350 00:08:08.350 real 0m1.816s 00:08:08.350 user 0m1.528s 00:08:08.350 sys 0m0.171s 00:08:08.350 05:10:01 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.350 05:10:01 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:08.350 ************************************ 00:08:08.350 END TEST bdev_write_zeroes 00:08:08.350 ************************************ 00:08:08.350 05:10:01 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.350 05:10:01 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:08.350 05:10:01 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.350 05:10:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:08.649 ************************************ 00:08:08.649 START TEST bdev_json_nonenclosed 00:08:08.649 ************************************ 00:08:08.649 05:10:01 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.649 [2024-11-10 05:10:01.636269] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:08.649 [2024-11-10 05:10:01.636358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74369 ] 00:08:08.649 [2024-11-10 05:10:01.774326] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.649 [2024-11-10 05:10:01.803662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.649 [2024-11-10 05:10:01.803734] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:08.649 [2024-11-10 05:10:01.803748] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:08.649 [2024-11-10 05:10:01.803757] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.649 00:08:08.649 real 0m0.284s 00:08:08.649 user 0m0.097s 00:08:08.649 sys 0m0.084s 00:08:08.649 05:10:01 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.649 05:10:01 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:08.649 ************************************ 00:08:08.649 END TEST bdev_json_nonenclosed 00:08:08.649 ************************************ 00:08:08.911 05:10:01 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.911 05:10:01 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:08.911 05:10:01 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.911 05:10:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:08.911 ************************************ 00:08:08.911 START TEST bdev_json_nonarray 00:08:08.911 ************************************ 00:08:08.911 05:10:01 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.911 [2024-11-10 05:10:01.971726] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:08.911 [2024-11-10 05:10:01.971840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74394 ] 00:08:08.911 [2024-11-10 05:10:02.118022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.173 [2024-11-10 05:10:02.149866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.173 [2024-11-10 05:10:02.149957] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
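The error above pins down what bdev_json_nonarray exercises: the top level is a valid JSON object, but "subsystems" is not an array. The exact contents of test/bdev/nonarray.json are not shown in this log; a minimal config with the same defect might look like:

    # Hypothetical fixture shape; any non-array value for "subsystems"
    # triggers the json_config_prepare_ctx error logged above.
    cat > /tmp/nonarray-demo.json <<'EOF'
    { "subsystems": { "subsystem": "bdev" } }
    EOF

As with the nonenclosed case, the test passes precisely because the load fails and bdevperf exits through spdk_app_stop with a non-zero code, as the lines below show.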
00:08:09.173 [2024-11-10 05:10:02.149973] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:09.173 [2024-11-10 05:10:02.149987] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.173 00:08:09.173 real 0m0.316s 00:08:09.173 user 0m0.121s 00:08:09.173 sys 0m0.092s 00:08:09.173 05:10:02 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.173 05:10:02 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:09.173 ************************************ 00:08:09.173 END TEST bdev_json_nonarray 00:08:09.173 ************************************ 00:08:09.173 05:10:02 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:09.173 05:10:02 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:09.173 05:10:02 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:09.173 05:10:02 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.173 05:10:02 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.173 05:10:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:09.173 ************************************ 00:08:09.173 START TEST bdev_gpt_uuid 00:08:09.173 ************************************ 00:08:09.173 05:10:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:08:09.173 05:10:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:09.173 05:10:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:09.173 05:10:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74414 00:08:09.173 05:10:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:09.173 05:10:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74414 00:08:09.173 05:10:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:09.173 05:10:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 74414 ']' 00:08:09.173 05:10:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.173 05:10:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.173 05:10:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.173 05:10:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.173 05:10:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:09.173 [2024-11-10 05:10:02.341610] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
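bdev_gpt_uuid drives a full spdk_tgt rather than bdevperf: it loads the bdev config over RPC, waits for bdev examine to finish, then looks each GPT partition bdev up by its unique partition GUID and asserts that the GUID appears both as the bdev alias and in the gpt driver_specific data. A sketch of the same checks done by hand (the GUID is the SPDK_TEST_first partition from the dump below; using rpc.py directly instead of the test's rpc_cmd wrapper is an assumption):

    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$(./scripts/rpc.py bdev_get_bdevs -b "$uuid")   # look up by alias/GUID
    [[ $(jq -r 'length' <<<"$bdev") == 1 ]]              # exactly one bdev matched
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$uuid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$uuid" ]]

The same assertions are repeated for the SPDK_TEST_second partition before the target process is killed.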
00:08:09.173 [2024-11-10 05:10:02.341730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74414 ] 00:08:09.434 [2024-11-10 05:10:02.488153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.434 [2024-11-10 05:10:02.519291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.006 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.006 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:08:10.006 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:10.006 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.006 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:10.266 Some configs were skipped because the RPC state that can call them passed over. 00:08:10.266 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.266 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:10.266 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.266 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:10.528 { 00:08:10.528 "name": "Nvme1n1p1", 00:08:10.528 "aliases": [ 00:08:10.528 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:10.528 ], 00:08:10.528 "product_name": "GPT Disk", 00:08:10.528 "block_size": 4096, 00:08:10.528 "num_blocks": 655104, 00:08:10.528 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:10.528 "assigned_rate_limits": { 00:08:10.528 "rw_ios_per_sec": 0, 00:08:10.528 "rw_mbytes_per_sec": 0, 00:08:10.528 "r_mbytes_per_sec": 0, 00:08:10.528 "w_mbytes_per_sec": 0 00:08:10.528 }, 00:08:10.528 "claimed": false, 00:08:10.528 "zoned": false, 00:08:10.528 "supported_io_types": { 00:08:10.528 "read": true, 00:08:10.528 "write": true, 00:08:10.528 "unmap": true, 00:08:10.528 "flush": true, 00:08:10.528 "reset": true, 00:08:10.528 "nvme_admin": false, 00:08:10.528 "nvme_io": false, 00:08:10.528 "nvme_io_md": false, 00:08:10.528 "write_zeroes": true, 00:08:10.528 "zcopy": false, 00:08:10.528 "get_zone_info": false, 00:08:10.528 "zone_management": false, 00:08:10.528 "zone_append": false, 00:08:10.528 "compare": true, 00:08:10.528 "compare_and_write": false, 00:08:10.528 "abort": true, 00:08:10.528 "seek_hole": false, 00:08:10.528 "seek_data": false, 00:08:10.528 "copy": true, 00:08:10.528 "nvme_iov_md": false 00:08:10.528 }, 00:08:10.528 "driver_specific": { 
00:08:10.528 "gpt": { 00:08:10.528 "base_bdev": "Nvme1n1", 00:08:10.528 "offset_blocks": 256, 00:08:10.528 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:10.528 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:10.528 "partition_name": "SPDK_TEST_first" 00:08:10.528 } 00:08:10.528 } 00:08:10.528 } 00:08:10.528 ]' 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:10.528 { 00:08:10.528 "name": "Nvme1n1p2", 00:08:10.528 "aliases": [ 00:08:10.528 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:10.528 ], 00:08:10.528 "product_name": "GPT Disk", 00:08:10.528 "block_size": 4096, 00:08:10.528 "num_blocks": 655103, 00:08:10.528 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:10.528 "assigned_rate_limits": { 00:08:10.528 "rw_ios_per_sec": 0, 00:08:10.528 "rw_mbytes_per_sec": 0, 00:08:10.528 "r_mbytes_per_sec": 0, 00:08:10.528 "w_mbytes_per_sec": 0 00:08:10.528 }, 00:08:10.528 "claimed": false, 00:08:10.528 "zoned": false, 00:08:10.528 "supported_io_types": { 00:08:10.528 "read": true, 00:08:10.528 "write": true, 00:08:10.528 "unmap": true, 00:08:10.528 "flush": true, 00:08:10.528 "reset": true, 00:08:10.528 "nvme_admin": false, 00:08:10.528 "nvme_io": false, 00:08:10.528 "nvme_io_md": false, 00:08:10.528 "write_zeroes": true, 00:08:10.528 "zcopy": false, 00:08:10.528 "get_zone_info": false, 00:08:10.528 "zone_management": false, 00:08:10.528 "zone_append": false, 00:08:10.528 "compare": true, 00:08:10.528 "compare_and_write": false, 00:08:10.528 "abort": true, 00:08:10.528 "seek_hole": false, 00:08:10.528 "seek_data": false, 00:08:10.528 "copy": true, 00:08:10.528 "nvme_iov_md": false 00:08:10.528 }, 00:08:10.528 "driver_specific": { 00:08:10.528 "gpt": { 00:08:10.528 "base_bdev": "Nvme1n1", 00:08:10.528 "offset_blocks": 655360, 00:08:10.528 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:10.528 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:10.528 "partition_name": "SPDK_TEST_second" 00:08:10.528 } 00:08:10.528 } 00:08:10.528 } 00:08:10.528 ]' 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74414 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 74414 ']' 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 74414 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74414 00:08:10.528 killing process with pid 74414 00:08:10.528 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:10.529 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:10.529 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74414' 00:08:10.529 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 74414 00:08:10.529 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 74414 00:08:10.790 00:08:10.790 real 0m1.722s 00:08:10.790 user 0m1.885s 00:08:10.790 sys 0m0.321s 00:08:10.790 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.790 05:10:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:10.790 ************************************ 00:08:10.790 END TEST bdev_gpt_uuid 00:08:10.790 ************************************ 00:08:11.050 05:10:04 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:11.050 05:10:04 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:11.050 05:10:04 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:11.050 05:10:04 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:11.050 05:10:04 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:11.050 05:10:04 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:11.050 05:10:04 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:11.050 05:10:04 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:11.050 05:10:04 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:11.311 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:11.311 Waiting for block devices as requested 00:08:11.311 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.608 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:11.608 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.608 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:16.900 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:16.900 05:10:09 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:16.900 05:10:09 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:16.900 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:16.900 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:16.900 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:16.900 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:16.900 05:10:10 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:16.900 00:08:16.900 real 0m50.496s 00:08:16.900 user 1m3.790s 00:08:16.900 sys 0m7.195s 00:08:16.900 05:10:10 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.900 ************************************ 00:08:16.900 END TEST blockdev_nvme_gpt 00:08:16.900 ************************************ 00:08:16.900 05:10:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.900 05:10:10 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:16.900 05:10:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:16.900 05:10:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.900 05:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:16.900 ************************************ 00:08:16.900 START TEST nvme 00:08:16.900 ************************************ 00:08:16.900 05:10:10 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:17.178 * Looking for test storage... 00:08:17.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:17.178 05:10:10 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:17.178 05:10:10 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:08:17.178 05:10:10 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:17.178 05:10:10 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:17.178 05:10:10 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.179 05:10:10 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.179 05:10:10 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.179 05:10:10 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.179 05:10:10 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.179 05:10:10 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.179 05:10:10 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.179 05:10:10 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.179 05:10:10 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.179 05:10:10 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.179 05:10:10 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.179 05:10:10 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:17.179 05:10:10 nvme -- scripts/common.sh@345 -- # : 1 00:08:17.179 05:10:10 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.179 05:10:10 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.179 05:10:10 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:17.179 05:10:10 nvme -- scripts/common.sh@353 -- # local d=1 00:08:17.179 05:10:10 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.179 05:10:10 nvme -- scripts/common.sh@355 -- # echo 1 00:08:17.179 05:10:10 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.179 05:10:10 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:17.179 05:10:10 nvme -- scripts/common.sh@353 -- # local d=2 00:08:17.179 05:10:10 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.179 05:10:10 nvme -- scripts/common.sh@355 -- # echo 2 00:08:17.179 05:10:10 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.179 05:10:10 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.179 05:10:10 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.179 05:10:10 nvme -- scripts/common.sh@368 -- # return 0 00:08:17.179 05:10:10 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.179 05:10:10 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:17.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.179 --rc genhtml_branch_coverage=1 00:08:17.179 --rc genhtml_function_coverage=1 00:08:17.179 --rc genhtml_legend=1 00:08:17.179 --rc geninfo_all_blocks=1 00:08:17.179 --rc geninfo_unexecuted_blocks=1 00:08:17.179 00:08:17.179 ' 00:08:17.179 05:10:10 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:17.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.179 --rc genhtml_branch_coverage=1 00:08:17.179 --rc genhtml_function_coverage=1 00:08:17.179 --rc genhtml_legend=1 00:08:17.179 --rc geninfo_all_blocks=1 00:08:17.179 --rc geninfo_unexecuted_blocks=1 00:08:17.179 00:08:17.179 ' 00:08:17.179 05:10:10 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:17.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.179 --rc genhtml_branch_coverage=1 00:08:17.179 --rc genhtml_function_coverage=1 00:08:17.179 --rc genhtml_legend=1 00:08:17.179 --rc geninfo_all_blocks=1 00:08:17.179 --rc geninfo_unexecuted_blocks=1 00:08:17.179 00:08:17.179 ' 00:08:17.179 05:10:10 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:17.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.179 --rc genhtml_branch_coverage=1 00:08:17.179 --rc genhtml_function_coverage=1 00:08:17.179 --rc genhtml_legend=1 00:08:17.179 --rc geninfo_all_blocks=1 00:08:17.179 --rc geninfo_unexecuted_blocks=1 00:08:17.179 00:08:17.179 ' 00:08:17.179 05:10:10 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:17.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:18.033 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:18.033 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:18.033 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:18.033 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:18.033 05:10:11 nvme -- nvme/nvme.sh@79 -- # uname 00:08:18.033 05:10:11 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:18.033 05:10:11 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:18.033 05:10:11 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:18.033 05:10:11 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:18.033 05:10:11 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:18.033 05:10:11 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:18.033 05:10:11 nvme -- common/autotest_common.sh@1071 -- # stubpid=75039 00:08:18.033 Waiting for stub to ready for secondary processes... 00:08:18.033 05:10:11 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:08:18.033 05:10:11 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:18.033 05:10:11 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/75039 ]] 00:08:18.033 05:10:11 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:18.033 05:10:11 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:18.033 [2024-11-10 05:10:11.222799] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:18.033 [2024-11-10 05:10:11.222924] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:18.976 [2024-11-10 05:10:11.956088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:18.976 [2024-11-10 05:10:11.975201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.976 [2024-11-10 05:10:11.975339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.976 [2024-11-10 05:10:11.975433] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.976 [2024-11-10 05:10:11.989696] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:18.976 [2024-11-10 05:10:11.989812] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:18.976 [2024-11-10 05:10:11.995776] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:18.976 [2024-11-10 05:10:11.995936] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:18.976 [2024-11-10 05:10:11.996369] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:18.976 [2024-11-10 05:10:11.996495] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:18.976 [2024-11-10 05:10:11.996534] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:18.976 [2024-11-10 05:10:11.996902] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:18.976 [2024-11-10 05:10:11.997020] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:18.976 [2024-11-10 05:10:11.997051] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:18.976 [2024-11-10 05:10:11.997537] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:18.976 [2024-11-10 05:10:11.997643] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:18.976 [2024-11-10 05:10:11.997677] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:18.976 [2024-11-10 05:10:11.997722] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:18.976 [2024-11-10 05:10:11.997774] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:18.976 05:10:12 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:18.976 done. 00:08:18.976 05:10:12 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:08:18.976 05:10:12 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:18.976 05:10:12 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:18.976 05:10:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.976 05:10:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:18.976 ************************************ 00:08:18.976 START TEST nvme_reset 00:08:18.976 ************************************ 00:08:18.976 05:10:12 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:19.238 Initializing NVMe Controllers 00:08:19.238 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:19.238 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:19.238 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:19.238 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:19.238 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:19.238 00:08:19.238 real 0m0.183s 00:08:19.238 user 0m0.054s 00:08:19.238 sys 0m0.087s 00:08:19.238 05:10:12 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.238 ************************************ 00:08:19.238 END TEST nvme_reset 00:08:19.238 ************************************ 00:08:19.238 05:10:12 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:19.238 05:10:12 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:19.238 05:10:12 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.238 05:10:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.238 05:10:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:19.238 ************************************ 00:08:19.238 START TEST nvme_identify 00:08:19.238 ************************************ 00:08:19.238 05:10:12 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:19.238 05:10:12 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:19.238 05:10:12 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:19.238 05:10:12 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:19.238 05:10:12 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:19.238 05:10:12 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:19.238 05:10:12 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:19.238 05:10:12 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:19.238 05:10:12 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:19.238 05:10:12 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:19.503 05:10:12 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:19.503 05:10:12 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:19.503 05:10:12 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:19.503 [2024-11-10 
05:10:12.631305] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 75060 terminated unexpected 00:08:19.503 ===================================================== 00:08:19.503 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:19.503 ===================================================== 00:08:19.503 Controller Capabilities/Features 00:08:19.503 ================================ 00:08:19.503 Vendor ID: 1b36 00:08:19.503 Subsystem Vendor ID: 1af4 00:08:19.503 Serial Number: 12340 00:08:19.503 Model Number: QEMU NVMe Ctrl 00:08:19.503 Firmware Version: 8.0.0 00:08:19.503 Recommended Arb Burst: 6 00:08:19.503 IEEE OUI Identifier: 00 54 52 00:08:19.503 Multi-path I/O 00:08:19.503 May have multiple subsystem ports: No 00:08:19.503 May have multiple controllers: No 00:08:19.503 Associated with SR-IOV VF: No 00:08:19.503 Max Data Transfer Size: 524288 00:08:19.503 Max Number of Namespaces: 256 00:08:19.503 Max Number of I/O Queues: 64 00:08:19.503 NVMe Specification Version (VS): 1.4 00:08:19.503 NVMe Specification Version (Identify): 1.4 00:08:19.503 Maximum Queue Entries: 2048 00:08:19.503 Contiguous Queues Required: Yes 00:08:19.503 Arbitration Mechanisms Supported 00:08:19.503 Weighted Round Robin: Not Supported 00:08:19.503 Vendor Specific: Not Supported 00:08:19.503 Reset Timeout: 7500 ms 00:08:19.503 Doorbell Stride: 4 bytes 00:08:19.503 NVM Subsystem Reset: Not Supported 00:08:19.503 Command Sets Supported 00:08:19.503 NVM Command Set: Supported 00:08:19.503 Boot Partition: Not Supported 00:08:19.503 Memory Page Size Minimum: 4096 bytes 00:08:19.503 Memory Page Size Maximum: 65536 bytes 00:08:19.503 Persistent Memory Region: Not Supported 00:08:19.503 Optional Asynchronous Events Supported 00:08:19.503 Namespace Attribute Notices: Supported 00:08:19.503 Firmware Activation Notices: Not Supported 00:08:19.503 ANA Change Notices: Not Supported 00:08:19.503 PLE Aggregate Log Change Notices: Not Supported 00:08:19.503 LBA Status Info Alert Notices: Not Supported 00:08:19.503 EGE Aggregate Log Change Notices: Not Supported 00:08:19.503 Normal NVM Subsystem Shutdown event: Not Supported 00:08:19.503 Zone Descriptor Change Notices: Not Supported 00:08:19.503 Discovery Log Change Notices: Not Supported 00:08:19.503 Controller Attributes 00:08:19.503 128-bit Host Identifier: Not Supported 00:08:19.503 Non-Operational Permissive Mode: Not Supported 00:08:19.503 NVM Sets: Not Supported 00:08:19.503 Read Recovery Levels: Not Supported 00:08:19.503 Endurance Groups: Not Supported 00:08:19.503 Predictable Latency Mode: Not Supported 00:08:19.503 Traffic Based Keep ALive: Not Supported 00:08:19.503 Namespace Granularity: Not Supported 00:08:19.503 SQ Associations: Not Supported 00:08:19.503 UUID List: Not Supported 00:08:19.503 Multi-Domain Subsystem: Not Supported 00:08:19.503 Fixed Capacity Management: Not Supported 00:08:19.503 Variable Capacity Management: Not Supported 00:08:19.503 Delete Endurance Group: Not Supported 00:08:19.503 Delete NVM Set: Not Supported 00:08:19.503 Extended LBA Formats Supported: Supported 00:08:19.503 Flexible Data Placement Supported: Not Supported 00:08:19.503 00:08:19.503 Controller Memory Buffer Support 00:08:19.503 ================================ 00:08:19.503 Supported: No 00:08:19.503 00:08:19.503 Persistent Memory Region Support 00:08:19.503 ================================ 00:08:19.503 Supported: No 00:08:19.503 00:08:19.503 Admin Command Set Attributes 00:08:19.503 ============================ 00:08:19.503 Security Send/Receive: Not 
Supported 00:08:19.503 Format NVM: Supported 00:08:19.503 Firmware Activate/Download: Not Supported 00:08:19.503 Namespace Management: Supported 00:08:19.503 Device Self-Test: Not Supported 00:08:19.503 Directives: Supported 00:08:19.503 NVMe-MI: Not Supported 00:08:19.503 Virtualization Management: Not Supported 00:08:19.503 Doorbell Buffer Config: Supported 00:08:19.503 Get LBA Status Capability: Not Supported 00:08:19.503 Command & Feature Lockdown Capability: Not Supported 00:08:19.503 Abort Command Limit: 4 00:08:19.503 Async Event Request Limit: 4 00:08:19.503 Number of Firmware Slots: N/A 00:08:19.503 Firmware Slot 1 Read-Only: N/A 00:08:19.503 Firmware Activation Without Reset: N/A 00:08:19.503 Multiple Update Detection Support: N/A 00:08:19.503 Firmware Update Granularity: No Information Provided 00:08:19.503 Per-Namespace SMART Log: Yes 00:08:19.503 Asymmetric Namespace Access Log Page: Not Supported 00:08:19.503 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:19.503 Command Effects Log Page: Supported 00:08:19.503 Get Log Page Extended Data: Supported 00:08:19.503 Telemetry Log Pages: Not Supported 00:08:19.503 Persistent Event Log Pages: Not Supported 00:08:19.503 Supported Log Pages Log Page: May Support 00:08:19.503 Commands Supported & Effects Log Page: Not Supported 00:08:19.503 Feature Identifiers & Effects Log Page:May Support 00:08:19.503 NVMe-MI Commands & Effects Log Page: May Support 00:08:19.503 Data Area 4 for Telemetry Log: Not Supported 00:08:19.503 Error Log Page Entries Supported: 1 00:08:19.503 Keep Alive: Not Supported 00:08:19.503 00:08:19.503 NVM Command Set Attributes 00:08:19.503 ========================== 00:08:19.503 Submission Queue Entry Size 00:08:19.503 Max: 64 00:08:19.503 Min: 64 00:08:19.503 Completion Queue Entry Size 00:08:19.503 Max: 16 00:08:19.503 Min: 16 00:08:19.503 Number of Namespaces: 256 00:08:19.503 Compare Command: Supported 00:08:19.503 Write Uncorrectable Command: Not Supported 00:08:19.503 Dataset Management Command: Supported 00:08:19.503 Write Zeroes Command: Supported 00:08:19.503 Set Features Save Field: Supported 00:08:19.503 Reservations: Not Supported 00:08:19.503 Timestamp: Supported 00:08:19.503 Copy: Supported 00:08:19.503 Volatile Write Cache: Present 00:08:19.503 Atomic Write Unit (Normal): 1 00:08:19.503 Atomic Write Unit (PFail): 1 00:08:19.503 Atomic Compare & Write Unit: 1 00:08:19.503 Fused Compare & Write: Not Supported 00:08:19.503 Scatter-Gather List 00:08:19.503 SGL Command Set: Supported 00:08:19.503 SGL Keyed: Not Supported 00:08:19.503 SGL Bit Bucket Descriptor: Not Supported 00:08:19.503 SGL Metadata Pointer: Not Supported 00:08:19.503 Oversized SGL: Not Supported 00:08:19.503 SGL Metadata Address: Not Supported 00:08:19.503 SGL Offset: Not Supported 00:08:19.503 Transport SGL Data Block: Not Supported 00:08:19.503 Replay Protected Memory Block: Not Supported 00:08:19.503 00:08:19.503 Firmware Slot Information 00:08:19.503 ========================= 00:08:19.503 Active slot: 1 00:08:19.503 Slot 1 Firmware Revision: 1.0 00:08:19.503 00:08:19.503 00:08:19.503 Commands Supported and Effects 00:08:19.503 ============================== 00:08:19.503 Admin Commands 00:08:19.503 -------------- 00:08:19.503 Delete I/O Submission Queue (00h): Supported 00:08:19.503 Create I/O Submission Queue (01h): Supported 00:08:19.503 Get Log Page (02h): Supported 00:08:19.503 Delete I/O Completion Queue (04h): Supported 00:08:19.503 Create I/O Completion Queue (05h): Supported 00:08:19.504 Identify (06h): Supported 00:08:19.504 
Abort (08h): Supported 00:08:19.504 Set Features (09h): Supported 00:08:19.504 Get Features (0Ah): Supported 00:08:19.504 Asynchronous Event Request (0Ch): Supported 00:08:19.504 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:19.504 Directive Send (19h): Supported 00:08:19.504 Directive Receive (1Ah): Supported 00:08:19.504 Virtualization Management (1Ch): Supported 00:08:19.504 Doorbell Buffer Config (7Ch): Supported 00:08:19.504 Format NVM (80h): Supported LBA-Change 00:08:19.504 I/O Commands 00:08:19.504 ------------ 00:08:19.504 Flush (00h): Supported LBA-Change 00:08:19.504 Write (01h): Supported LBA-Change 00:08:19.504 Read (02h): Supported 00:08:19.504 Compare (05h): Supported 00:08:19.504 Write Zeroes (08h): Supported LBA-Change 00:08:19.504 Dataset Management (09h): Supported LBA-Change 00:08:19.504 Unknown (0Ch): Supported 00:08:19.504 Unknown (12h): Supported 00:08:19.504 Copy (19h): Supported LBA-Change 00:08:19.504 Unknown (1Dh): Supported LBA-Change 00:08:19.504 00:08:19.504 Error Log 00:08:19.504 ========= 00:08:19.504 00:08:19.504 Arbitration 00:08:19.504 =========== 00:08:19.504 Arbitration Burst: no limit 00:08:19.504 00:08:19.504 Power Management 00:08:19.504 ================ 00:08:19.504 Number of Power States: 1 00:08:19.504 Current Power State: Power State #0 00:08:19.504 Power State #0: 00:08:19.504 Max Power: 25.00 W 00:08:19.504 Non-Operational State: Operational 00:08:19.504 Entry Latency: 16 microseconds 00:08:19.504 Exit Latency: 4 microseconds 00:08:19.504 Relative Read Throughput: 0 00:08:19.504 Relative Read Latency: 0 00:08:19.504 Relative Write Throughput: 0 00:08:19.504 Relative Write Latency: 0 00:08:19.504 Idle Power[2024-11-10 05:10:12.632438] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 75060 terminated unexpected 00:08:19.504 : Not Reported 00:08:19.504 Active Power: Not Reported 00:08:19.504 Non-Operational Permissive Mode: Not Supported 00:08:19.504 00:08:19.504 Health Information 00:08:19.504 ================== 00:08:19.504 Critical Warnings: 00:08:19.504 Available Spare Space: OK 00:08:19.504 Temperature: OK 00:08:19.504 Device Reliability: OK 00:08:19.504 Read Only: No 00:08:19.504 Volatile Memory Backup: OK 00:08:19.504 Current Temperature: 323 Kelvin (50 Celsius) 00:08:19.504 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:19.504 Available Spare: 0% 00:08:19.504 Available Spare Threshold: 0% 00:08:19.504 Life Percentage Used: 0% 00:08:19.504 Data Units Read: 691 00:08:19.504 Data Units Written: 620 00:08:19.504 Host Read Commands: 40221 00:08:19.504 Host Write Commands: 40007 00:08:19.504 Controller Busy Time: 0 minutes 00:08:19.504 Power Cycles: 0 00:08:19.504 Power On Hours: 0 hours 00:08:19.504 Unsafe Shutdowns: 0 00:08:19.504 Unrecoverable Media Errors: 0 00:08:19.504 Lifetime Error Log Entries: 0 00:08:19.504 Warning Temperature Time: 0 minutes 00:08:19.504 Critical Temperature Time: 0 minutes 00:08:19.504 00:08:19.504 Number of Queues 00:08:19.504 ================ 00:08:19.504 Number of I/O Submission Queues: 64 00:08:19.504 Number of I/O Completion Queues: 64 00:08:19.504 00:08:19.504 ZNS Specific Controller Data 00:08:19.504 ============================ 00:08:19.504 Zone Append Size Limit: 0 00:08:19.504 00:08:19.504 00:08:19.504 Active Namespaces 00:08:19.504 ================= 00:08:19.504 Namespace ID:1 00:08:19.504 Error Recovery Timeout: Unlimited 00:08:19.504 Command Set Identifier: NVM (00h) 00:08:19.504 Deallocate: Supported 00:08:19.504 Deallocated/Unwritten Error: 
Supported 00:08:19.504 Deallocated Read Value: All 0x00 00:08:19.504 Deallocate in Write Zeroes: Not Supported 00:08:19.504 Deallocated Guard Field: 0xFFFF 00:08:19.504 Flush: Supported 00:08:19.504 Reservation: Not Supported 00:08:19.504 Metadata Transferred as: Separate Metadata Buffer 00:08:19.504 Namespace Sharing Capabilities: Private 00:08:19.504 Size (in LBAs): 1548666 (5GiB) 00:08:19.504 Capacity (in LBAs): 1548666 (5GiB) 00:08:19.504 Utilization (in LBAs): 1548666 (5GiB) 00:08:19.504 Thin Provisioning: Not Supported 00:08:19.504 Per-NS Atomic Units: No 00:08:19.504 Maximum Single Source Range Length: 128 00:08:19.504 Maximum Copy Length: 128 00:08:19.504 Maximum Source Range Count: 128 00:08:19.504 NGUID/EUI64 Never Reused: No 00:08:19.504 Namespace Write Protected: No 00:08:19.504 Number of LBA Formats: 8 00:08:19.504 Current LBA Format: LBA Format #07 00:08:19.504 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:19.504 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:19.504 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:19.504 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:19.504 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:19.504 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:19.504 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:19.504 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:19.504 00:08:19.504 NVM Specific Namespace Data 00:08:19.504 =========================== 00:08:19.504 Logical Block Storage Tag Mask: 0 00:08:19.504 Protection Information Capabilities: 00:08:19.504 16b Guard Protection Information Storage Tag Support: No 00:08:19.504 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:19.504 Storage Tag Check Read Support: No 00:08:19.504 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.504 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.504 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.504 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.504 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.504 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.504 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.504 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.504 ===================================================== 00:08:19.504 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:19.504 ===================================================== 00:08:19.504 Controller Capabilities/Features 00:08:19.504 ================================ 00:08:19.504 Vendor ID: 1b36 00:08:19.504 Subsystem Vendor ID: 1af4 00:08:19.504 Serial Number: 12341 00:08:19.504 Model Number: QEMU NVMe Ctrl 00:08:19.504 Firmware Version: 8.0.0 00:08:19.504 Recommended Arb Burst: 6 00:08:19.504 IEEE OUI Identifier: 00 54 52 00:08:19.504 Multi-path I/O 00:08:19.504 May have multiple subsystem ports: No 00:08:19.504 May have multiple controllers: No 00:08:19.504 Associated with SR-IOV VF: No 00:08:19.504 Max Data Transfer Size: 524288 00:08:19.504 Max Number of Namespaces: 256 00:08:19.504 Max Number of I/O Queues: 64 00:08:19.504 NVMe Specification Version (VS): 1.4 00:08:19.504 NVMe Specification Version (Identify): 1.4 
00:08:19.504 Maximum Queue Entries: 2048 00:08:19.504 Contiguous Queues Required: Yes 00:08:19.504 Arbitration Mechanisms Supported 00:08:19.504 Weighted Round Robin: Not Supported 00:08:19.504 Vendor Specific: Not Supported 00:08:19.504 Reset Timeout: 7500 ms 00:08:19.504 Doorbell Stride: 4 bytes 00:08:19.504 NVM Subsystem Reset: Not Supported 00:08:19.504 Command Sets Supported 00:08:19.504 NVM Command Set: Supported 00:08:19.504 Boot Partition: Not Supported 00:08:19.504 Memory Page Size Minimum: 4096 bytes 00:08:19.504 Memory Page Size Maximum: 65536 bytes 00:08:19.504 Persistent Memory Region: Not Supported 00:08:19.504 Optional Asynchronous Events Supported 00:08:19.504 Namespace Attribute Notices: Supported 00:08:19.504 Firmware Activation Notices: Not Supported 00:08:19.504 ANA Change Notices: Not Supported 00:08:19.504 PLE Aggregate Log Change Notices: Not Supported 00:08:19.504 LBA Status Info Alert Notices: Not Supported 00:08:19.504 EGE Aggregate Log Change Notices: Not Supported 00:08:19.504 Normal NVM Subsystem Shutdown event: Not Supported 00:08:19.504 Zone Descriptor Change Notices: Not Supported 00:08:19.504 Discovery Log Change Notices: Not Supported 00:08:19.504 Controller Attributes 00:08:19.504 128-bit Host Identifier: Not Supported 00:08:19.504 Non-Operational Permissive Mode: Not Supported 00:08:19.504 NVM Sets: Not Supported 00:08:19.504 Read Recovery Levels: Not Supported 00:08:19.504 Endurance Groups: Not Supported 00:08:19.504 Predictable Latency Mode: Not Supported 00:08:19.504 Traffic Based Keep ALive: Not Supported 00:08:19.504 Namespace Granularity: Not Supported 00:08:19.504 SQ Associations: Not Supported 00:08:19.504 UUID List: Not Supported 00:08:19.504 Multi-Domain Subsystem: Not Supported 00:08:19.504 Fixed Capacity Management: Not Supported 00:08:19.504 Variable Capacity Management: Not Supported 00:08:19.504 Delete Endurance Group: Not Supported 00:08:19.504 Delete NVM Set: Not Supported 00:08:19.505 Extended LBA Formats Supported: Supported 00:08:19.505 Flexible Data Placement Supported: Not Supported 00:08:19.505 00:08:19.505 Controller Memory Buffer Support 00:08:19.505 ================================ 00:08:19.505 Supported: No 00:08:19.505 00:08:19.505 Persistent Memory Region Support 00:08:19.505 ================================ 00:08:19.505 Supported: No 00:08:19.505 00:08:19.505 Admin Command Set Attributes 00:08:19.505 ============================ 00:08:19.505 Security Send/Receive: Not Supported 00:08:19.505 Format NVM: Supported 00:08:19.505 Firmware Activate/Download: Not Supported 00:08:19.505 Namespace Management: Supported 00:08:19.505 Device Self-Test: Not Supported 00:08:19.505 Directives: Supported 00:08:19.505 NVMe-MI: Not Supported 00:08:19.505 Virtualization Management: Not Supported 00:08:19.505 Doorbell Buffer Config: Supported 00:08:19.505 Get LBA Status Capability: Not Supported 00:08:19.505 Command & Feature Lockdown Capability: Not Supported 00:08:19.505 Abort Command Limit: 4 00:08:19.505 Async Event Request Limit: 4 00:08:19.505 Number of Firmware Slots: N/A 00:08:19.505 Firmware Slot 1 Read-Only: N/A 00:08:19.505 Firmware Activation Without Reset: N/A 00:08:19.505 Multiple Update Detection Support: N/A 00:08:19.505 Firmware Update Granularity: No Information Provided 00:08:19.505 Per-Namespace SMART Log: Yes 00:08:19.505 Asymmetric Namespace Access Log Page: Not Supported 00:08:19.505 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:19.505 Command Effects Log Page: Supported 00:08:19.505 Get Log Page Extended Data: 
Supported 00:08:19.505 Telemetry Log Pages: Not Supported 00:08:19.505 Persistent Event Log Pages: Not Supported 00:08:19.505 Supported Log Pages Log Page: May Support 00:08:19.505 Commands Supported & Effects Log Page: Not Supported 00:08:19.505 Feature Identifiers & Effects Log Page:May Support 00:08:19.505 NVMe-MI Commands & Effects Log Page: May Support 00:08:19.505 Data Area 4 for Telemetry Log: Not Supported 00:08:19.505 Error Log Page Entries Supported: 1 00:08:19.505 Keep Alive: Not Supported 00:08:19.505 00:08:19.505 NVM Command Set Attributes 00:08:19.505 ========================== 00:08:19.505 Submission Queue Entry Size 00:08:19.505 Max: 64 00:08:19.505 Min: 64 00:08:19.505 Completion Queue Entry Size 00:08:19.505 Max: 16 00:08:19.505 Min: 16 00:08:19.505 Number of Namespaces: 256 00:08:19.505 Compare Command: Supported 00:08:19.505 Write Uncorrectable Command: Not Supported 00:08:19.505 Dataset Management Command: Supported 00:08:19.505 Write Zeroes Command: Supported 00:08:19.505 Set Features Save Field: Supported 00:08:19.505 Reservations: Not Supported 00:08:19.505 Timestamp: Supported 00:08:19.505 Copy: Supported 00:08:19.505 Volatile Write Cache: Present 00:08:19.505 Atomic Write Unit (Normal): 1 00:08:19.505 Atomic Write Unit (PFail): 1 00:08:19.505 Atomic Compare & Write Unit: 1 00:08:19.505 Fused Compare & Write: Not Supported 00:08:19.505 Scatter-Gather List 00:08:19.505 SGL Command Set: Supported 00:08:19.505 SGL Keyed: Not Supported 00:08:19.505 SGL Bit Bucket Descriptor: Not Supported 00:08:19.505 SGL Metadata Pointer: Not Supported 00:08:19.505 Oversized SGL: Not Supported 00:08:19.505 SGL Metadata Address: Not Supported 00:08:19.505 SGL Offset: Not Supported 00:08:19.505 Transport SGL Data Block: Not Supported 00:08:19.505 Replay Protected Memory Block: Not Supported 00:08:19.505 00:08:19.505 Firmware Slot Information 00:08:19.505 ========================= 00:08:19.505 Active slot: 1 00:08:19.505 Slot 1 Firmware Revision: 1.0 00:08:19.505 00:08:19.505 00:08:19.505 Commands Supported and Effects 00:08:19.505 ============================== 00:08:19.505 Admin Commands 00:08:19.505 -------------- 00:08:19.505 Delete I/O Submission Queue (00h): Supported 00:08:19.505 Create I/O Submission Queue (01h): Supported 00:08:19.505 Get Log Page (02h): Supported 00:08:19.505 Delete I/O Completion Queue (04h): Supported 00:08:19.505 Create I/O Completion Queue (05h): Supported 00:08:19.505 Identify (06h): Supported 00:08:19.505 Abort (08h): Supported 00:08:19.505 Set Features (09h): Supported 00:08:19.505 Get Features (0Ah): Supported 00:08:19.505 Asynchronous Event Request (0Ch): Supported 00:08:19.505 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:19.505 Directive Send (19h): Supported 00:08:19.505 Directive Receive (1Ah): Supported 00:08:19.505 Virtualization Management (1Ch): Supported 00:08:19.505 Doorbell Buffer Config (7Ch): Supported 00:08:19.505 Format NVM (80h): Supported LBA-Change 00:08:19.505 I/O Commands 00:08:19.505 ------------ 00:08:19.505 Flush (00h): Supported LBA-Change 00:08:19.505 Write (01h): Supported LBA-Change 00:08:19.505 Read (02h): Supported 00:08:19.505 Compare (05h): Supported 00:08:19.505 Write Zeroes (08h): Supported LBA-Change 00:08:19.505 Dataset Management (09h): Supported LBA-Change 00:08:19.505 Unknown (0Ch): Supported 00:08:19.505 Unknown (12h): Supported 00:08:19.505 Copy (19h): Supported LBA-Change 00:08:19.505 Unknown (1Dh): Supported LBA-Change 00:08:19.505 00:08:19.505 Error Log 00:08:19.505 ========= 00:08:19.505 
00:08:19.505 Arbitration 00:08:19.505 =========== 00:08:19.505 Arbitration Burst: no limit 00:08:19.505 00:08:19.505 Power Management 00:08:19.505 ================ 00:08:19.505 Number of Power States: 1 00:08:19.505 Current Power State: Power State #0 00:08:19.505 Power State #0: 00:08:19.505 Max Power: 25.00 W 00:08:19.505 Non-Operational State: Operational 00:08:19.505 Entry Latency: 16 microseconds 00:08:19.505 Exit Latency: 4 microseconds 00:08:19.505 Relative Read Throughput: 0 00:08:19.505 Relative Read Latency: 0 00:08:19.505 Relative Write Throughput: 0 00:08:19.505 Relative Write Latency: 0 00:08:19.505 Idle Power: Not Reported 00:08:19.505 Active Power: Not Reported 00:08:19.505 Non-Operational Permissive Mode: Not Supported 00:08:19.505 00:08:19.505 Health Information 00:08:19.505 ================== 00:08:19.505 Critical Warnings: 00:08:19.505 Available Spare Space: OK 00:08:19.505 Temperature: [2024-11-10 05:10:12.633133] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 75060 terminated unexpected 00:08:19.505 OK 00:08:19.505 Device Reliability: OK 00:08:19.505 Read Only: No 00:08:19.505 Volatile Memory Backup: OK 00:08:19.505 Current Temperature: 323 Kelvin (50 Celsius) 00:08:19.505 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:19.505 Available Spare: 0% 00:08:19.505 Available Spare Threshold: 0% 00:08:19.505 Life Percentage Used: 0% 00:08:19.505 Data Units Read: 1065 00:08:19.505 Data Units Written: 930 00:08:19.505 Host Read Commands: 60490 00:08:19.505 Host Write Commands: 59234 00:08:19.505 Controller Busy Time: 0 minutes 00:08:19.505 Power Cycles: 0 00:08:19.505 Power On Hours: 0 hours 00:08:19.505 Unsafe Shutdowns: 0 00:08:19.505 Unrecoverable Media Errors: 0 00:08:19.505 Lifetime Error Log Entries: 0 00:08:19.505 Warning Temperature Time: 0 minutes 00:08:19.505 Critical Temperature Time: 0 minutes 00:08:19.505 00:08:19.505 Number of Queues 00:08:19.505 ================ 00:08:19.505 Number of I/O Submission Queues: 64 00:08:19.505 Number of I/O Completion Queues: 64 00:08:19.505 00:08:19.505 ZNS Specific Controller Data 00:08:19.505 ============================ 00:08:19.505 Zone Append Size Limit: 0 00:08:19.505 00:08:19.505 00:08:19.505 Active Namespaces 00:08:19.505 ================= 00:08:19.505 Namespace ID:1 00:08:19.505 Error Recovery Timeout: Unlimited 00:08:19.505 Command Set Identifier: NVM (00h) 00:08:19.505 Deallocate: Supported 00:08:19.505 Deallocated/Unwritten Error: Supported 00:08:19.505 Deallocated Read Value: All 0x00 00:08:19.505 Deallocate in Write Zeroes: Not Supported 00:08:19.505 Deallocated Guard Field: 0xFFFF 00:08:19.505 Flush: Supported 00:08:19.505 Reservation: Not Supported 00:08:19.505 Namespace Sharing Capabilities: Private 00:08:19.505 Size (in LBAs): 1310720 (5GiB) 00:08:19.505 Capacity (in LBAs): 1310720 (5GiB) 00:08:19.505 Utilization (in LBAs): 1310720 (5GiB) 00:08:19.505 Thin Provisioning: Not Supported 00:08:19.505 Per-NS Atomic Units: No 00:08:19.505 Maximum Single Source Range Length: 128 00:08:19.505 Maximum Copy Length: 128 00:08:19.505 Maximum Source Range Count: 128 00:08:19.505 NGUID/EUI64 Never Reused: No 00:08:19.505 Namespace Write Protected: No 00:08:19.505 Number of LBA Formats: 8 00:08:19.505 Current LBA Format: LBA Format #04 00:08:19.505 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:19.505 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:19.505 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:19.505 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:19.505 
LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:19.506 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:19.506 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:19.506 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:19.506 00:08:19.506 NVM Specific Namespace Data 00:08:19.506 =========================== 00:08:19.506 Logical Block Storage Tag Mask: 0 00:08:19.506 Protection Information Capabilities: 00:08:19.506 16b Guard Protection Information Storage Tag Support: No 00:08:19.506 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:19.506 Storage Tag Check Read Support: No 00:08:19.506 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.506 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.506 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.506 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.506 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.506 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.506 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.506 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.506 ===================================================== 00:08:19.506 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:19.506 ===================================================== 00:08:19.506 Controller Capabilities/Features 00:08:19.506 ================================ 00:08:19.506 Vendor ID: 1b36 00:08:19.506 Subsystem Vendor ID: 1af4 00:08:19.506 Serial Number: 12343 00:08:19.506 Model Number: QEMU NVMe Ctrl 00:08:19.506 Firmware Version: 8.0.0 00:08:19.506 Recommended Arb Burst: 6 00:08:19.506 IEEE OUI Identifier: 00 54 52 00:08:19.506 Multi-path I/O 00:08:19.506 May have multiple subsystem ports: No 00:08:19.506 May have multiple controllers: Yes 00:08:19.506 Associated with SR-IOV VF: No 00:08:19.506 Max Data Transfer Size: 524288 00:08:19.506 Max Number of Namespaces: 256 00:08:19.506 Max Number of I/O Queues: 64 00:08:19.506 NVMe Specification Version (VS): 1.4 00:08:19.506 NVMe Specification Version (Identify): 1.4 00:08:19.506 Maximum Queue Entries: 2048 00:08:19.506 Contiguous Queues Required: Yes 00:08:19.506 Arbitration Mechanisms Supported 00:08:19.506 Weighted Round Robin: Not Supported 00:08:19.506 Vendor Specific: Not Supported 00:08:19.506 Reset Timeout: 7500 ms 00:08:19.506 Doorbell Stride: 4 bytes 00:08:19.506 NVM Subsystem Reset: Not Supported 00:08:19.506 Command Sets Supported 00:08:19.506 NVM Command Set: Supported 00:08:19.506 Boot Partition: Not Supported 00:08:19.506 Memory Page Size Minimum: 4096 bytes 00:08:19.506 Memory Page Size Maximum: 65536 bytes 00:08:19.506 Persistent Memory Region: Not Supported 00:08:19.506 Optional Asynchronous Events Supported 00:08:19.506 Namespace Attribute Notices: Supported 00:08:19.506 Firmware Activation Notices: Not Supported 00:08:19.506 ANA Change Notices: Not Supported 00:08:19.506 PLE Aggregate Log Change Notices: Not Supported 00:08:19.506 LBA Status Info Alert Notices: Not Supported 00:08:19.506 EGE Aggregate Log Change Notices: Not Supported 00:08:19.506 Normal NVM Subsystem Shutdown event: Not Supported 00:08:19.506 Zone Descriptor Change Notices: Not Supported 
00:08:19.506 Discovery Log Change Notices: Not Supported 00:08:19.506 Controller Attributes 00:08:19.506 128-bit Host Identifier: Not Supported 00:08:19.506 Non-Operational Permissive Mode: Not Supported 00:08:19.506 NVM Sets: Not Supported 00:08:19.506 Read Recovery Levels: Not Supported 00:08:19.506 Endurance Groups: Supported 00:08:19.506 Predictable Latency Mode: Not Supported 00:08:19.506 Traffic Based Keep Alive: Not Supported 00:08:19.506 Namespace Granularity: Not Supported 00:08:19.506 SQ Associations: Not Supported 00:08:19.506 UUID List: Not Supported 00:08:19.506 Multi-Domain Subsystem: Not Supported 00:08:19.506 Fixed Capacity Management: Not Supported 00:08:19.506 Variable Capacity Management: Not Supported 00:08:19.506 Delete Endurance Group: Not Supported 00:08:19.506 Delete NVM Set: Not Supported 00:08:19.506 Extended LBA Formats Supported: Supported 00:08:19.506 Flexible Data Placement Supported: Supported 00:08:19.506 00:08:19.506 Controller Memory Buffer Support 00:08:19.506 ================================ 00:08:19.506 Supported: No 00:08:19.506 00:08:19.506 Persistent Memory Region Support 00:08:19.506 ================================ 00:08:19.506 Supported: No 00:08:19.506 00:08:19.506 Admin Command Set Attributes 00:08:19.506 ============================ 00:08:19.506 Security Send/Receive: Not Supported 00:08:19.506 Format NVM: Supported 00:08:19.506 Firmware Activate/Download: Not Supported 00:08:19.506 Namespace Management: Supported 00:08:19.506 Device Self-Test: Not Supported 00:08:19.506 Directives: Supported 00:08:19.506 NVMe-MI: Not Supported 00:08:19.506 Virtualization Management: Not Supported 00:08:19.506 Doorbell Buffer Config: Supported 00:08:19.506 Get LBA Status Capability: Not Supported 00:08:19.506 Command & Feature Lockdown Capability: Not Supported 00:08:19.506 Abort Command Limit: 4 00:08:19.506 Async Event Request Limit: 4 00:08:19.506 Number of Firmware Slots: N/A 00:08:19.506 Firmware Slot 1 Read-Only: N/A 00:08:19.506 Firmware Activation Without Reset: N/A 00:08:19.506 Multiple Update Detection Support: N/A 00:08:19.506 Firmware Update Granularity: No Information Provided 00:08:19.506 Per-Namespace SMART Log: Yes 00:08:19.506 Asymmetric Namespace Access Log Page: Not Supported 00:08:19.506 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:19.506 Command Effects Log Page: Supported 00:08:19.506 Get Log Page Extended Data: Supported 00:08:19.506 Telemetry Log Pages: Not Supported 00:08:19.506 Persistent Event Log Pages: Not Supported 00:08:19.506 Supported Log Pages Log Page: May Support 00:08:19.506 Commands Supported & Effects Log Page: Not Supported 00:08:19.506 Feature Identifiers & Effects Log Page: May Support 00:08:19.506 NVMe-MI Commands & Effects Log Page: May Support 00:08:19.506 Data Area 4 for Telemetry Log: Not Supported 00:08:19.506 Error Log Page Entries Supported: 1 00:08:19.506 Keep Alive: Not Supported 00:08:19.506 00:08:19.506 NVM Command Set Attributes 00:08:19.506 ========================== 00:08:19.506 Submission Queue Entry Size 00:08:19.506 Max: 64 00:08:19.506 Min: 64 00:08:19.506 Completion Queue Entry Size 00:08:19.506 Max: 16 00:08:19.506 Min: 16 00:08:19.506 Number of Namespaces: 256 00:08:19.506 Compare Command: Supported 00:08:19.506 Write Uncorrectable Command: Not Supported 00:08:19.506 Dataset Management Command: Supported 00:08:19.506 Write Zeroes Command: Supported 00:08:19.506 Set Features Save Field: Supported 00:08:19.506 Reservations: Not Supported 00:08:19.506 Timestamp: Supported 00:08:19.506 Copy:
Supported 00:08:19.506 Volatile Write Cache: Present 00:08:19.506 Atomic Write Unit (Normal): 1 00:08:19.506 Atomic Write Unit (PFail): 1 00:08:19.506 Atomic Compare & Write Unit: 1 00:08:19.506 Fused Compare & Write: Not Supported 00:08:19.506 Scatter-Gather List 00:08:19.506 SGL Command Set: Supported 00:08:19.506 SGL Keyed: Not Supported 00:08:19.506 SGL Bit Bucket Descriptor: Not Supported 00:08:19.506 SGL Metadata Pointer: Not Supported 00:08:19.506 Oversized SGL: Not Supported 00:08:19.506 SGL Metadata Address: Not Supported 00:08:19.506 SGL Offset: Not Supported 00:08:19.506 Transport SGL Data Block: Not Supported 00:08:19.506 Replay Protected Memory Block: Not Supported 00:08:19.506 00:08:19.506 Firmware Slot Information 00:08:19.506 ========================= 00:08:19.506 Active slot: 1 00:08:19.506 Slot 1 Firmware Revision: 1.0 00:08:19.506 00:08:19.506 00:08:19.506 Commands Supported and Effects 00:08:19.506 ============================== 00:08:19.506 Admin Commands 00:08:19.506 -------------- 00:08:19.506 Delete I/O Submission Queue (00h): Supported 00:08:19.506 Create I/O Submission Queue (01h): Supported 00:08:19.506 Get Log Page (02h): Supported 00:08:19.506 Delete I/O Completion Queue (04h): Supported 00:08:19.506 Create I/O Completion Queue (05h): Supported 00:08:19.506 Identify (06h): Supported 00:08:19.506 Abort (08h): Supported 00:08:19.506 Set Features (09h): Supported 00:08:19.506 Get Features (0Ah): Supported 00:08:19.506 Asynchronous Event Request (0Ch): Supported 00:08:19.506 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:19.506 Directive Send (19h): Supported 00:08:19.507 Directive Receive (1Ah): Supported 00:08:19.507 Virtualization Management (1Ch): Supported 00:08:19.507 Doorbell Buffer Config (7Ch): Supported 00:08:19.507 Format NVM (80h): Supported LBA-Change 00:08:19.507 I/O Commands 00:08:19.507 ------------ 00:08:19.507 Flush (00h): Supported LBA-Change 00:08:19.507 Write (01h): Supported LBA-Change 00:08:19.507 Read (02h): Supported 00:08:19.507 Compare (05h): Supported 00:08:19.507 Write Zeroes (08h): Supported LBA-Change 00:08:19.507 Dataset Management (09h): Supported LBA-Change 00:08:19.507 Unknown (0Ch): Supported 00:08:19.507 Unknown (12h): Supported 00:08:19.507 Copy (19h): Supported LBA-Change 00:08:19.507 Unknown (1Dh): Supported LBA-Change 00:08:19.507 00:08:19.507 Error Log 00:08:19.507 ========= 00:08:19.507 00:08:19.507 Arbitration 00:08:19.507 =========== 00:08:19.507 Arbitration Burst: no limit 00:08:19.507 00:08:19.507 Power Management 00:08:19.507 ================ 00:08:19.507 Number of Power States: 1 00:08:19.507 Current Power State: Power State #0 00:08:19.507 Power State #0: 00:08:19.507 Max Power: 25.00 W 00:08:19.507 Non-Operational State: Operational 00:08:19.507 Entry Latency: 16 microseconds 00:08:19.507 Exit Latency: 4 microseconds 00:08:19.507 Relative Read Throughput: 0 00:08:19.507 Relative Read Latency: 0 00:08:19.507 Relative Write Throughput: 0 00:08:19.507 Relative Write Latency: 0 00:08:19.507 Idle Power: Not Reported 00:08:19.507 Active Power: Not Reported 00:08:19.507 Non-Operational Permissive Mode: Not Supported 00:08:19.507 00:08:19.507 Health Information 00:08:19.507 ================== 00:08:19.507 Critical Warnings: 00:08:19.507 Available Spare Space: OK 00:08:19.507 Temperature: OK 00:08:19.507 Device Reliability: OK 00:08:19.507 Read Only: No 00:08:19.507 Volatile Memory Backup: OK 00:08:19.507 Current Temperature: 323 Kelvin (50 Celsius) 00:08:19.507 Temperature Threshold: 343 Kelvin (70 
Celsius) 00:08:19.507 Available Spare: 0% 00:08:19.507 Available Spare Threshold: 0% 00:08:19.507 Life Percentage Used: 0% 00:08:19.507 Data Units Read: 950 00:08:19.507 Data Units Written: 879 00:08:19.507 Host Read Commands: 42840 00:08:19.507 Host Write Commands: 42263 00:08:19.507 Controller Busy Time: 0 minutes 00:08:19.507 Power Cycles: 0 00:08:19.507 Power On Hours: 0 hours 00:08:19.507 Unsafe Shutdowns: 0 00:08:19.507 Unrecoverable Media Errors: 0 00:08:19.507 Lifetime Error Log Entries: 0 00:08:19.507 Warning Temperature Time: 0 minutes 00:08:19.507 Critical Temperature Time: 0 minutes 00:08:19.507 00:08:19.507 Number of Queues 00:08:19.507 ================ 00:08:19.507 Number of I/O Submission Queues: 64 00:08:19.507 Number of I/O Completion Queues: 64 00:08:19.507 00:08:19.507 ZNS Specific Controller Data 00:08:19.507 ============================ 00:08:19.507 Zone Append Size Limit: 0 00:08:19.507 00:08:19.507 00:08:19.507 Active Namespaces 00:08:19.507 ================= 00:08:19.507 Namespace ID:1 00:08:19.507 Error Recovery Timeout: Unlimited 00:08:19.507 Command Set Identifier: NVM (00h) 00:08:19.507 Deallocate: Supported 00:08:19.507 Deallocated/Unwritten Error: Supported 00:08:19.507 Deallocated Read Value: All 0x00 00:08:19.507 Deallocate in Write Zeroes: Not Supported 00:08:19.507 Deallocated Guard Field: 0xFFFF 00:08:19.507 Flush: Supported 00:08:19.507 Reservation: Not Supported 00:08:19.507 Namespace Sharing Capabilities: Multiple Controllers 00:08:19.507 Size (in LBAs): 262144 (1GiB) 00:08:19.507 Capacity (in LBAs): 262144 (1GiB) 00:08:19.507 Utilization (in LBAs): 262144 (1GiB) 00:08:19.507 Thin Provisioning: Not Supported 00:08:19.507 Per-NS Atomic Units: No 00:08:19.507 Maximum Single Source Range Length: 128 00:08:19.507 Maximum Copy Length: 128 00:08:19.507 Maximum Source Range Count: 128 00:08:19.507 NGUID/EUI64 Never Reused: No 00:08:19.507 Namespace Write Protected: No 00:08:19.507 Endurance group ID: 1 00:08:19.507 Number of LBA Formats: 8 00:08:19.507 Current LBA Format: LBA Format #04 00:08:19.507 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:19.507 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:19.507 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:19.507 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:19.507 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:19.507 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:19.507 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:19.507 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:19.507 00:08:19.507 Get Feature FDP: 00:08:19.507 ================ 00:08:19.507 Enabled: Yes 00:08:19.507 FDP configuration index: 0 00:08:19.507 00:08:19.507 FDP configurations log page 00:08:19.507 =========================== 00:08:19.507 Number of FDP configurations: 1 00:08:19.507 Version: 0 00:08:19.507 Size: 112 00:08:19.507 FDP Configuration Descriptor: 0 00:08:19.507 Descriptor Size: 96 00:08:19.507 Reclaim Group Identifier format: 2 00:08:19.507 FDP Volatile Write Cache: Not Present 00:08:19.507 FDP Configuration: Valid 00:08:19.507 Vendor Specific Size: 0 00:08:19.507 Number of Reclaim Groups: 2 00:08:19.507 Number of Reclaim Unit Handles: 8 00:08:19.507 Max Placement Identifiers: 128 00:08:19.507 Number of Namespaces Supported: 256 00:08:19.507 Reclaim unit Nominal Size: 6000000 bytes 00:08:19.507 Estimated Reclaim Unit Time Limit: Not Reported 00:08:19.507 RUH Desc #000: RUH Type: Initially Isolated 00:08:19.507 RUH Desc #001: RUH Type: Initially Isolated 00:08:19.507 RUH
Desc #002: RUH Type: Initially Isolated 00:08:19.507 RUH Desc #003: RUH Type: Initially Isolated 00:08:19.507 RUH Desc #004: RUH Type: Initially Isolated 00:08:19.507 RUH Desc #005: RUH Type: Initially Isolated 00:08:19.507 RUH Desc #006: RUH Type: Initially Isolated 00:08:19.507 RUH Desc #007: RUH Type: Initially Isolated 00:08:19.507 00:08:19.507 FDP reclaim unit handle usage log page 00:08:19.507 ====================================== 00:08:19.507 Number of Reclaim Unit Handles: 8 00:08:19.507 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:19.507 RUH Usage Desc #001: RUH Attributes: Unused 00:08:19.507 RUH Usage Desc #002: RUH Attributes: Unused 00:08:19.507 RUH Usage Desc #003: RUH Attributes: Unused 00:08:19.507 RUH Usage Desc #004: RUH Attributes: Unused 00:08:19.507 RUH Usage Desc #005: RUH Attributes: Unused 00:08:19.507 RUH Usage Desc #006: RUH Attributes: Unused 00:08:19.507 RUH Usage Desc #007: RUH Attributes: Unused 00:08:19.507 00:08:19.507 FDP statistics log page 00:08:19.507 ======================= 00:08:19.507 Host bytes with metadata written: 554213376 00:08:19.507 Media bytes with metadata written: 555798528 [2024-11-10 05:10:12.634655] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 75060 terminated unexpected 00:08:19.507 Media bytes erased: 0 00:08:19.507 00:08:19.507 FDP events log page 00:08:19.507 =================== 00:08:19.507 Number of FDP events: 0 00:08:19.507 00:08:19.507 NVM Specific Namespace Data 00:08:19.507 =========================== 00:08:19.507 Logical Block Storage Tag Mask: 0 00:08:19.507 Protection Information Capabilities: 00:08:19.507 16b Guard Protection Information Storage Tag Support: No 00:08:19.507 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:19.507 Storage Tag Check Read Support: No 00:08:19.507 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.507 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.507 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.507 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.507 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.507 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.507 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.507 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.507 ===================================================== 00:08:19.507 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:19.507 ===================================================== 00:08:19.507 Controller Capabilities/Features 00:08:19.507 ================================ 00:08:19.507 Vendor ID: 1b36 00:08:19.507 Subsystem Vendor ID: 1af4 00:08:19.507 Serial Number: 12342 00:08:19.507 Model Number: QEMU NVMe Ctrl 00:08:19.507 Firmware Version: 8.0.0 00:08:19.507 Recommended Arb Burst: 6 00:08:19.507 IEEE OUI Identifier: 00 54 52 00:08:19.507 Multi-path I/O 00:08:19.507 May have multiple subsystem ports: No 00:08:19.507 May have multiple controllers: No 00:08:19.508 Associated with SR-IOV VF: No 00:08:19.508 Max Data Transfer Size: 524288 00:08:19.508 Max Number of Namespaces: 256 00:08:19.508 Max Number of I/O Queues: 64 00:08:19.508
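As a quick sanity check on the FDP statistics log page above for the fdp-subsys3 controller, one might compare host-written bytes against media-written bytes to estimate effective write amplification. The awk one-liner below is an illustrative sketch using the two byte counts reported above; it is not part of the recorded test run:

awk 'BEGIN { host = 554213376; media = 555798528; printf "media/host write ratio: %.4f\n", media / host }'

With these values the ratio comes out to roughly 1.0029, i.e. media writes track host writes almost one-to-one on this freshly provisioned FDP namespace.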
NVMe Specification Version (VS): 1.4 00:08:19.508 NVMe Specification Version (Identify): 1.4 00:08:19.508 Maximum Queue Entries: 2048 00:08:19.508 Contiguous Queues Required: Yes 00:08:19.508 Arbitration Mechanisms Supported 00:08:19.508 Weighted Round Robin: Not Supported 00:08:19.508 Vendor Specific: Not Supported 00:08:19.508 Reset Timeout: 7500 ms 00:08:19.508 Doorbell Stride: 4 bytes 00:08:19.508 NVM Subsystem Reset: Not Supported 00:08:19.508 Command Sets Supported 00:08:19.508 NVM Command Set: Supported 00:08:19.508 Boot Partition: Not Supported 00:08:19.508 Memory Page Size Minimum: 4096 bytes 00:08:19.508 Memory Page Size Maximum: 65536 bytes 00:08:19.508 Persistent Memory Region: Not Supported 00:08:19.508 Optional Asynchronous Events Supported 00:08:19.508 Namespace Attribute Notices: Supported 00:08:19.508 Firmware Activation Notices: Not Supported 00:08:19.508 ANA Change Notices: Not Supported 00:08:19.508 PLE Aggregate Log Change Notices: Not Supported 00:08:19.508 LBA Status Info Alert Notices: Not Supported 00:08:19.508 EGE Aggregate Log Change Notices: Not Supported 00:08:19.508 Normal NVM Subsystem Shutdown event: Not Supported 00:08:19.508 Zone Descriptor Change Notices: Not Supported 00:08:19.508 Discovery Log Change Notices: Not Supported 00:08:19.508 Controller Attributes 00:08:19.508 128-bit Host Identifier: Not Supported 00:08:19.508 Non-Operational Permissive Mode: Not Supported 00:08:19.508 NVM Sets: Not Supported 00:08:19.508 Read Recovery Levels: Not Supported 00:08:19.508 Endurance Groups: Not Supported 00:08:19.508 Predictable Latency Mode: Not Supported 00:08:19.508 Traffic Based Keep Alive: Not Supported 00:08:19.508 Namespace Granularity: Not Supported 00:08:19.508 SQ Associations: Not Supported 00:08:19.508 UUID List: Not Supported 00:08:19.508 Multi-Domain Subsystem: Not Supported 00:08:19.508 Fixed Capacity Management: Not Supported 00:08:19.508 Variable Capacity Management: Not Supported 00:08:19.508 Delete Endurance Group: Not Supported 00:08:19.508 Delete NVM Set: Not Supported 00:08:19.508 Extended LBA Formats Supported: Supported 00:08:19.508 Flexible Data Placement Supported: Not Supported 00:08:19.508 00:08:19.508 Controller Memory Buffer Support 00:08:19.508 ================================ 00:08:19.508 Supported: No 00:08:19.508 00:08:19.508 Persistent Memory Region Support 00:08:19.508 ================================ 00:08:19.508 Supported: No 00:08:19.508 00:08:19.508 Admin Command Set Attributes 00:08:19.508 ============================ 00:08:19.508 Security Send/Receive: Not Supported 00:08:19.508 Format NVM: Supported 00:08:19.508 Firmware Activate/Download: Not Supported 00:08:19.508 Namespace Management: Supported 00:08:19.508 Device Self-Test: Not Supported 00:08:19.508 Directives: Supported 00:08:19.508 NVMe-MI: Not Supported 00:08:19.508 Virtualization Management: Not Supported 00:08:19.508 Doorbell Buffer Config: Supported 00:08:19.508 Get LBA Status Capability: Not Supported 00:08:19.508 Command & Feature Lockdown Capability: Not Supported 00:08:19.508 Abort Command Limit: 4 00:08:19.508 Async Event Request Limit: 4 00:08:19.508 Number of Firmware Slots: N/A 00:08:19.508 Firmware Slot 1 Read-Only: N/A 00:08:19.508 Firmware Activation Without Reset: N/A 00:08:19.508 Multiple Update Detection Support: N/A 00:08:19.508 Firmware Update Granularity: No Information Provided 00:08:19.508 Per-Namespace SMART Log: Yes 00:08:19.508 Asymmetric Namespace Access Log Page: Not Supported 00:08:19.508 Subsystem NQN: nqn.2019-08.org.qemu:12342
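Each identify dump in this log is produced by one invocation of spdk_nvme_identify per PCIe address, driven by the for bdf in "${bdfs[@]}" loop visible in the nvme.sh trace lines. A minimal standalone equivalent in the same spirit, assuming the four traddr values that appear in this log and the in-repo binary path shown in the trace lines, would be:

for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
  # -r selects transport type and PCIe address; -i 0 mirrors the flags used by nvme.sh
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0
done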
00:08:19.508 Command Effects Log Page: Supported 00:08:19.508 Get Log Page Extended Data: Supported 00:08:19.508 Telemetry Log Pages: Not Supported 00:08:19.508 Persistent Event Log Pages: Not Supported 00:08:19.508 Supported Log Pages Log Page: May Support 00:08:19.508 Commands Supported & Effects Log Page: Not Supported 00:08:19.508 Feature Identifiers & Effects Log Page: May Support 00:08:19.508 NVMe-MI Commands & Effects Log Page: May Support 00:08:19.508 Data Area 4 for Telemetry Log: Not Supported 00:08:19.508 Error Log Page Entries Supported: 1 00:08:19.508 Keep Alive: Not Supported 00:08:19.508 00:08:19.508 NVM Command Set Attributes 00:08:19.508 ========================== 00:08:19.508 Submission Queue Entry Size 00:08:19.508 Max: 64 00:08:19.508 Min: 64 00:08:19.508 Completion Queue Entry Size 00:08:19.508 Max: 16 00:08:19.508 Min: 16 00:08:19.508 Number of Namespaces: 256 00:08:19.508 Compare Command: Supported 00:08:19.508 Write Uncorrectable Command: Not Supported 00:08:19.508 Dataset Management Command: Supported 00:08:19.508 Write Zeroes Command: Supported 00:08:19.508 Set Features Save Field: Supported 00:08:19.508 Reservations: Not Supported 00:08:19.508 Timestamp: Supported 00:08:19.508 Copy: Supported 00:08:19.508 Volatile Write Cache: Present 00:08:19.508 Atomic Write Unit (Normal): 1 00:08:19.508 Atomic Write Unit (PFail): 1 00:08:19.508 Atomic Compare & Write Unit: 1 00:08:19.508 Fused Compare & Write: Not Supported 00:08:19.508 Scatter-Gather List 00:08:19.508 SGL Command Set: Supported 00:08:19.508 SGL Keyed: Not Supported 00:08:19.508 SGL Bit Bucket Descriptor: Not Supported 00:08:19.508 SGL Metadata Pointer: Not Supported 00:08:19.508 Oversized SGL: Not Supported 00:08:19.508 SGL Metadata Address: Not Supported 00:08:19.508 SGL Offset: Not Supported 00:08:19.508 Transport SGL Data Block: Not Supported 00:08:19.508 Replay Protected Memory Block: Not Supported 00:08:19.508 00:08:19.508 Firmware Slot Information 00:08:19.508 ========================= 00:08:19.508 Active slot: 1 00:08:19.508 Slot 1 Firmware Revision: 1.0 00:08:19.508 00:08:19.508 00:08:19.508 Commands Supported and Effects 00:08:19.508 ============================== 00:08:19.508 Admin Commands 00:08:19.508 -------------- 00:08:19.508 Delete I/O Submission Queue (00h): Supported 00:08:19.508 Create I/O Submission Queue (01h): Supported 00:08:19.508 Get Log Page (02h): Supported 00:08:19.508 Delete I/O Completion Queue (04h): Supported 00:08:19.508 Create I/O Completion Queue (05h): Supported 00:08:19.508 Identify (06h): Supported 00:08:19.508 Abort (08h): Supported 00:08:19.508 Set Features (09h): Supported 00:08:19.508 Get Features (0Ah): Supported 00:08:19.508 Asynchronous Event Request (0Ch): Supported 00:08:19.508 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:19.508 Directive Send (19h): Supported 00:08:19.508 Directive Receive (1Ah): Supported 00:08:19.508 Virtualization Management (1Ch): Supported 00:08:19.508 Doorbell Buffer Config (7Ch): Supported 00:08:19.508 Format NVM (80h): Supported LBA-Change 00:08:19.508 I/O Commands 00:08:19.508 ------------ 00:08:19.508 Flush (00h): Supported LBA-Change 00:08:19.508 Write (01h): Supported LBA-Change 00:08:19.508 Read (02h): Supported 00:08:19.508 Compare (05h): Supported 00:08:19.508 Write Zeroes (08h): Supported LBA-Change 00:08:19.508 Dataset Management (09h): Supported LBA-Change 00:08:19.508 Unknown (0Ch): Supported 00:08:19.508 Unknown (12h): Supported 00:08:19.508 Copy (19h): Supported LBA-Change 00:08:19.508 Unknown (1Dh):
Supported LBA-Change 00:08:19.508 00:08:19.508 Error Log 00:08:19.508 ========= 00:08:19.508 00:08:19.508 Arbitration 00:08:19.509 =========== 00:08:19.509 Arbitration Burst: no limit 00:08:19.509 00:08:19.509 Power Management 00:08:19.509 ================ 00:08:19.509 Number of Power States: 1 00:08:19.509 Current Power State: Power State #0 00:08:19.509 Power State #0: 00:08:19.509 Max Power: 25.00 W 00:08:19.509 Non-Operational State: Operational 00:08:19.509 Entry Latency: 16 microseconds 00:08:19.509 Exit Latency: 4 microseconds 00:08:19.509 Relative Read Throughput: 0 00:08:19.509 Relative Read Latency: 0 00:08:19.509 Relative Write Throughput: 0 00:08:19.509 Relative Write Latency: 0 00:08:19.509 Idle Power: Not Reported 00:08:19.509 Active Power: Not Reported 00:08:19.509 Non-Operational Permissive Mode: Not Supported 00:08:19.509 00:08:19.509 Health Information 00:08:19.509 ================== 00:08:19.509 Critical Warnings: 00:08:19.509 Available Spare Space: OK 00:08:19.509 Temperature: OK 00:08:19.509 Device Reliability: OK 00:08:19.509 Read Only: No 00:08:19.509 Volatile Memory Backup: OK 00:08:19.509 Current Temperature: 323 Kelvin (50 Celsius) 00:08:19.509 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:19.509 Available Spare: 0% 00:08:19.509 Available Spare Threshold: 0% 00:08:19.509 Life Percentage Used: 0% 00:08:19.509 Data Units Read: 2302 00:08:19.509 Data Units Written: 2090 00:08:19.509 Host Read Commands: 123527 00:08:19.509 Host Write Commands: 121796 00:08:19.509 Controller Busy Time: 0 minutes 00:08:19.509 Power Cycles: 0 00:08:19.509 Power On Hours: 0 hours 00:08:19.509 Unsafe Shutdowns: 0 00:08:19.509 Unrecoverable Media Errors: 0 00:08:19.509 Lifetime Error Log Entries: 0 00:08:19.509 Warning Temperature Time: 0 minutes 00:08:19.509 Critical Temperature Time: 0 minutes 00:08:19.509 00:08:19.509 Number of Queues 00:08:19.509 ================ 00:08:19.509 Number of I/O Submission Queues: 64 00:08:19.509 Number of I/O Completion Queues: 64 00:08:19.509 00:08:19.509 ZNS Specific Controller Data 00:08:19.509 ============================ 00:08:19.509 Zone Append Size Limit: 0 00:08:19.509 00:08:19.509 00:08:19.509 Active Namespaces 00:08:19.509 ================= 00:08:19.509 Namespace ID:1 00:08:19.509 Error Recovery Timeout: Unlimited 00:08:19.509 Command Set Identifier: NVM (00h) 00:08:19.509 Deallocate: Supported 00:08:19.509 Deallocated/Unwritten Error: Supported 00:08:19.509 Deallocated Read Value: All 0x00 00:08:19.509 Deallocate in Write Zeroes: Not Supported 00:08:19.509 Deallocated Guard Field: 0xFFFF 00:08:19.509 Flush: Supported 00:08:19.509 Reservation: Not Supported 00:08:19.509 Namespace Sharing Capabilities: Private 00:08:19.509 Size (in LBAs): 1048576 (4GiB) 00:08:19.509 Capacity (in LBAs): 1048576 (4GiB) 00:08:19.509 Utilization (in LBAs): 1048576 (4GiB) 00:08:19.509 Thin Provisioning: Not Supported 00:08:19.509 Per-NS Atomic Units: No 00:08:19.509 Maximum Single Source Range Length: 128 00:08:19.509 Maximum Copy Length: 128 00:08:19.509 Maximum Source Range Count: 128 00:08:19.509 NGUID/EUI64 Never Reused: No 00:08:19.509 Namespace Write Protected: No 00:08:19.509 Number of LBA Formats: 8 00:08:19.509 Current LBA Format: LBA Format #04 00:08:19.509 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:19.509 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:19.509 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:19.509 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:19.509 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:19.509 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:19.509 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:19.509 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:19.509 00:08:19.509 NVM Specific Namespace Data 00:08:19.509 =========================== 00:08:19.509 Logical Block Storage Tag Mask: 0 00:08:19.509 Protection Information Capabilities: 00:08:19.509 16b Guard Protection Information Storage Tag Support: No 00:08:19.509 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:19.509 Storage Tag Check Read Support: No 00:08:19.509 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.509 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.509 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.509 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.509 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.509 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.509 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.509 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.509 Namespace ID:2 00:08:19.509 Error Recovery Timeout: Unlimited 00:08:19.509 Command Set Identifier: NVM (00h) 00:08:19.509 Deallocate: Supported 00:08:19.509 Deallocated/Unwritten Error: Supported 00:08:19.509 Deallocated Read Value: All 0x00 00:08:19.509 Deallocate in Write Zeroes: Not Supported 00:08:19.509 Deallocated Guard Field: 0xFFFF 00:08:19.509 Flush: Supported 00:08:19.509 Reservation: Not Supported 00:08:19.509 Namespace Sharing Capabilities: Private 00:08:19.509 Size (in LBAs): 1048576 (4GiB) 00:08:19.509 Capacity (in LBAs): 1048576 (4GiB) 00:08:19.509 Utilization (in LBAs): 1048576 (4GiB) 00:08:19.509 Thin Provisioning: Not Supported 00:08:19.509 Per-NS Atomic Units: No 00:08:19.509 Maximum Single Source Range Length: 128 00:08:19.509 Maximum Copy Length: 128 00:08:19.509 Maximum Source Range Count: 128 00:08:19.509 NGUID/EUI64 Never Reused: No 00:08:19.509 Namespace Write Protected: No 00:08:19.509 Number of LBA Formats: 8 00:08:19.509 Current LBA Format: LBA Format #04 00:08:19.509 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:19.509 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:19.509 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:19.509 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:19.509 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:19.509 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:19.509 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:19.509 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:19.509 00:08:19.509 NVM Specific Namespace Data 00:08:19.509 =========================== 00:08:19.509 Logical Block Storage Tag Mask: 0 00:08:19.509 Protection Information Capabilities: 00:08:19.509 16b Guard Protection Information Storage Tag Support: No 00:08:19.509 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:19.509 Storage Tag Check Read Support: No 00:08:19.509 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.509 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.509 
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.509 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.509 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.509 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.509 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.509 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.509 Namespace ID:3 00:08:19.509 Error Recovery Timeout: Unlimited 00:08:19.509 Command Set Identifier: NVM (00h) 00:08:19.509 Deallocate: Supported 00:08:19.509 Deallocated/Unwritten Error: Supported 00:08:19.509 Deallocated Read Value: All 0x00 00:08:19.509 Deallocate in Write Zeroes: Not Supported 00:08:19.509 Deallocated Guard Field: 0xFFFF 00:08:19.509 Flush: Supported 00:08:19.509 Reservation: Not Supported 00:08:19.509 Namespace Sharing Capabilities: Private 00:08:19.509 Size (in LBAs): 1048576 (4GiB) 00:08:19.509 Capacity (in LBAs): 1048576 (4GiB) 00:08:19.509 Utilization (in LBAs): 1048576 (4GiB) 00:08:19.509 Thin Provisioning: Not Supported 00:08:19.509 Per-NS Atomic Units: No 00:08:19.509 Maximum Single Source Range Length: 128 00:08:19.509 Maximum Copy Length: 128 00:08:19.509 Maximum Source Range Count: 128 00:08:19.509 NGUID/EUI64 Never Reused: No 00:08:19.509 Namespace Write Protected: No 00:08:19.509 Number of LBA Formats: 8 00:08:19.509 Current LBA Format: LBA Format #04 00:08:19.509 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:19.509 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:19.509 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:19.509 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:19.509 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:19.509 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:19.509 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:19.509 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:19.509 00:08:19.509 NVM Specific Namespace Data 00:08:19.509 =========================== 00:08:19.509 Logical Block Storage Tag Mask: 0 00:08:19.509 Protection Information Capabilities: 00:08:19.510 16b Guard Protection Information Storage Tag Support: No 00:08:19.510 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:19.510 Storage Tag Check Read Support: No 00:08:19.510 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.510 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.510 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.510 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.510 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.510 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.510 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.510 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.510 05:10:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:19.510 05:10:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:19.772 ===================================================== 00:08:19.772 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:19.772 ===================================================== 00:08:19.772 Controller Capabilities/Features 00:08:19.772 ================================ 00:08:19.772 Vendor ID: 1b36 00:08:19.772 Subsystem Vendor ID: 1af4 00:08:19.772 Serial Number: 12340 00:08:19.773 Model Number: QEMU NVMe Ctrl 00:08:19.773 Firmware Version: 8.0.0 00:08:19.773 Recommended Arb Burst: 6 00:08:19.773 IEEE OUI Identifier: 00 54 52 00:08:19.773 Multi-path I/O 00:08:19.773 May have multiple subsystem ports: No 00:08:19.773 May have multiple controllers: No 00:08:19.773 Associated with SR-IOV VF: No 00:08:19.773 Max Data Transfer Size: 524288 00:08:19.773 Max Number of Namespaces: 256 00:08:19.773 Max Number of I/O Queues: 64 00:08:19.773 NVMe Specification Version (VS): 1.4 00:08:19.773 NVMe Specification Version (Identify): 1.4 00:08:19.773 Maximum Queue Entries: 2048 00:08:19.773 Contiguous Queues Required: Yes 00:08:19.773 Arbitration Mechanisms Supported 00:08:19.773 Weighted Round Robin: Not Supported 00:08:19.773 Vendor Specific: Not Supported 00:08:19.773 Reset Timeout: 7500 ms 00:08:19.773 Doorbell Stride: 4 bytes 00:08:19.773 NVM Subsystem Reset: Not Supported 00:08:19.773 Command Sets Supported 00:08:19.773 NVM Command Set: Supported 00:08:19.773 Boot Partition: Not Supported 00:08:19.773 Memory Page Size Minimum: 4096 bytes 00:08:19.773 Memory Page Size Maximum: 65536 bytes 00:08:19.773 Persistent Memory Region: Not Supported 00:08:19.773 Optional Asynchronous Events Supported 00:08:19.773 Namespace Attribute Notices: Supported 00:08:19.773 Firmware Activation Notices: Not Supported 00:08:19.773 ANA Change Notices: Not Supported 00:08:19.773 PLE Aggregate Log Change Notices: Not Supported 00:08:19.773 LBA Status Info Alert Notices: Not Supported 00:08:19.773 EGE Aggregate Log Change Notices: Not Supported 00:08:19.773 Normal NVM Subsystem Shutdown event: Not Supported 00:08:19.773 Zone Descriptor Change Notices: Not Supported 00:08:19.773 Discovery Log Change Notices: Not Supported 00:08:19.773 Controller Attributes 00:08:19.773 128-bit Host Identifier: Not Supported 00:08:19.773 Non-Operational Permissive Mode: Not Supported 00:08:19.773 NVM Sets: Not Supported 00:08:19.773 Read Recovery Levels: Not Supported 00:08:19.773 Endurance Groups: Not Supported 00:08:19.773 Predictable Latency Mode: Not Supported 00:08:19.773 Traffic Based Keep Alive: Not Supported 00:08:19.773 Namespace Granularity: Not Supported 00:08:19.773 SQ Associations: Not Supported 00:08:19.773 UUID List: Not Supported 00:08:19.773 Multi-Domain Subsystem: Not Supported 00:08:19.773 Fixed Capacity Management: Not Supported 00:08:19.773 Variable Capacity Management: Not Supported 00:08:19.773 Delete Endurance Group: Not Supported 00:08:19.773 Delete NVM Set: Not Supported 00:08:19.773 Extended LBA Formats Supported: Supported 00:08:19.773 Flexible Data Placement Supported: Not Supported 00:08:19.773 00:08:19.773 Controller Memory Buffer Support 00:08:19.773 ================================ 00:08:19.773 Supported: No 00:08:19.773 00:08:19.773 Persistent Memory Region Support 00:08:19.773 ================================ 00:08:19.773 Supported: No 00:08:19.773 00:08:19.773 Admin Command Set Attributes 00:08:19.773 ============================ 00:08:19.773 Security Send/Receive: Not Supported 00:08:19.773
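The "Doorbell Stride: 4 bytes" entry above corresponds to CAP.DSTRD = 0; per the NVMe specification, a queue pair's doorbell registers live at offset 0x1000 + ((2 * qid + isCQ) * (4 << CAP.DSTRD)) from BAR0. A small sketch of that arithmetic (illustrative only; the qid values are arbitrary, not taken from this run):

stride=4   # 4 << 0, matching the reported 4-byte doorbell stride
for qid in 0 1; do
  printf 'qid %d: SQ tail doorbell 0x%04x, CQ head doorbell 0x%04x\n' "$qid" \
    $(( 0x1000 + (2 * qid) * stride )) \
    $(( 0x1000 + (2 * qid + 1) * stride ))
done

For qid 1 this yields 0x1008 and 0x100c, which is where a driver for this controller would ring the I/O queue pair's doorbells.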
Format NVM: Supported 00:08:19.773 Firmware Activate/Download: Not Supported 00:08:19.773 Namespace Management: Supported 00:08:19.773 Device Self-Test: Not Supported 00:08:19.773 Directives: Supported 00:08:19.773 NVMe-MI: Not Supported 00:08:19.773 Virtualization Management: Not Supported 00:08:19.773 Doorbell Buffer Config: Supported 00:08:19.773 Get LBA Status Capability: Not Supported 00:08:19.773 Command & Feature Lockdown Capability: Not Supported 00:08:19.773 Abort Command Limit: 4 00:08:19.773 Async Event Request Limit: 4 00:08:19.773 Number of Firmware Slots: N/A 00:08:19.773 Firmware Slot 1 Read-Only: N/A 00:08:19.773 Firmware Activation Without Reset: N/A 00:08:19.773 Multiple Update Detection Support: N/A 00:08:19.773 Firmware Update Granularity: No Information Provided 00:08:19.773 Per-Namespace SMART Log: Yes 00:08:19.773 Asymmetric Namespace Access Log Page: Not Supported 00:08:19.773 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:19.773 Command Effects Log Page: Supported 00:08:19.773 Get Log Page Extended Data: Supported 00:08:19.773 Telemetry Log Pages: Not Supported 00:08:19.773 Persistent Event Log Pages: Not Supported 00:08:19.773 Supported Log Pages Log Page: May Support 00:08:19.773 Commands Supported & Effects Log Page: Not Supported 00:08:19.773 Feature Identifiers & Effects Log Page: May Support 00:08:19.773 NVMe-MI Commands & Effects Log Page: May Support 00:08:19.773 Data Area 4 for Telemetry Log: Not Supported 00:08:19.773 Error Log Page Entries Supported: 1 00:08:19.773 Keep Alive: Not Supported 00:08:19.773 00:08:19.773 NVM Command Set Attributes 00:08:19.773 ========================== 00:08:19.773 Submission Queue Entry Size 00:08:19.773 Max: 64 00:08:19.773 Min: 64 00:08:19.773 Completion Queue Entry Size 00:08:19.773 Max: 16 00:08:19.773 Min: 16 00:08:19.773 Number of Namespaces: 256 00:08:19.773 Compare Command: Supported 00:08:19.773 Write Uncorrectable Command: Not Supported 00:08:19.773 Dataset Management Command: Supported 00:08:19.773 Write Zeroes Command: Supported 00:08:19.773 Set Features Save Field: Supported 00:08:19.773 Reservations: Not Supported 00:08:19.773 Timestamp: Supported 00:08:19.773 Copy: Supported 00:08:19.773 Volatile Write Cache: Present 00:08:19.773 Atomic Write Unit (Normal): 1 00:08:19.773 Atomic Write Unit (PFail): 1 00:08:19.773 Atomic Compare & Write Unit: 1 00:08:19.773 Fused Compare & Write: Not Supported 00:08:19.773 Scatter-Gather List 00:08:19.773 SGL Command Set: Supported 00:08:19.773 SGL Keyed: Not Supported 00:08:19.773 SGL Bit Bucket Descriptor: Not Supported 00:08:19.773 SGL Metadata Pointer: Not Supported 00:08:19.773 Oversized SGL: Not Supported 00:08:19.773 SGL Metadata Address: Not Supported 00:08:19.773 SGL Offset: Not Supported 00:08:19.773 Transport SGL Data Block: Not Supported 00:08:19.773 Replay Protected Memory Block: Not Supported 00:08:19.773 00:08:19.773 Firmware Slot Information 00:08:19.773 ========================= 00:08:19.773 Active slot: 1 00:08:19.773 Slot 1 Firmware Revision: 1.0 00:08:19.773 00:08:19.773 00:08:19.773 Commands Supported and Effects 00:08:19.773 ============================== 00:08:19.773 Admin Commands 00:08:19.773 -------------- 00:08:19.773 Delete I/O Submission Queue (00h): Supported 00:08:19.773 Create I/O Submission Queue (01h): Supported 00:08:19.773 Get Log Page (02h): Supported 00:08:19.773 Delete I/O Completion Queue (04h): Supported 00:08:19.773 Create I/O Completion Queue (05h): Supported 00:08:19.773 Identify (06h): Supported 00:08:19.773 Abort (08h): Supported
00:08:19.773 Set Features (09h): Supported 00:08:19.773 Get Features (0Ah): Supported 00:08:19.773 Asynchronous Event Request (0Ch): Supported 00:08:19.773 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:19.773 Directive Send (19h): Supported 00:08:19.773 Directive Receive (1Ah): Supported 00:08:19.773 Virtualization Management (1Ch): Supported 00:08:19.773 Doorbell Buffer Config (7Ch): Supported 00:08:19.773 Format NVM (80h): Supported LBA-Change 00:08:19.773 I/O Commands 00:08:19.773 ------------ 00:08:19.773 Flush (00h): Supported LBA-Change 00:08:19.773 Write (01h): Supported LBA-Change 00:08:19.773 Read (02h): Supported 00:08:19.773 Compare (05h): Supported 00:08:19.773 Write Zeroes (08h): Supported LBA-Change 00:08:19.773 Dataset Management (09h): Supported LBA-Change 00:08:19.773 Unknown (0Ch): Supported 00:08:19.773 Unknown (12h): Supported 00:08:19.773 Copy (19h): Supported LBA-Change 00:08:19.773 Unknown (1Dh): Supported LBA-Change 00:08:19.773 00:08:19.773 Error Log 00:08:19.773 ========= 00:08:19.773 00:08:19.773 Arbitration 00:08:19.773 =========== 00:08:19.773 Arbitration Burst: no limit 00:08:19.773 00:08:19.773 Power Management 00:08:19.773 ================ 00:08:19.773 Number of Power States: 1 00:08:19.773 Current Power State: Power State #0 00:08:19.773 Power State #0: 00:08:19.773 Max Power: 25.00 W 00:08:19.773 Non-Operational State: Operational 00:08:19.773 Entry Latency: 16 microseconds 00:08:19.773 Exit Latency: 4 microseconds 00:08:19.773 Relative Read Throughput: 0 00:08:19.773 Relative Read Latency: 0 00:08:19.773 Relative Write Throughput: 0 00:08:19.773 Relative Write Latency: 0 00:08:19.773 Idle Power: Not Reported 00:08:19.773 Active Power: Not Reported 00:08:19.773 Non-Operational Permissive Mode: Not Supported 00:08:19.773 00:08:19.773 Health Information 00:08:19.773 ================== 00:08:19.773 Critical Warnings: 00:08:19.773 Available Spare Space: OK 00:08:19.773 Temperature: OK 00:08:19.774 Device Reliability: OK 00:08:19.774 Read Only: No 00:08:19.774 Volatile Memory Backup: OK 00:08:19.774 Current Temperature: 323 Kelvin (50 Celsius) 00:08:19.774 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:19.774 Available Spare: 0% 00:08:19.774 Available Spare Threshold: 0% 00:08:19.774 Life Percentage Used: 0% 00:08:19.774 Data Units Read: 691 00:08:19.774 Data Units Written: 620 00:08:19.774 Host Read Commands: 40221 00:08:19.774 Host Write Commands: 40007 00:08:19.774 Controller Busy Time: 0 minutes 00:08:19.774 Power Cycles: 0 00:08:19.774 Power On Hours: 0 hours 00:08:19.774 Unsafe Shutdowns: 0 00:08:19.774 Unrecoverable Media Errors: 0 00:08:19.774 Lifetime Error Log Entries: 0 00:08:19.774 Warning Temperature Time: 0 minutes 00:08:19.774 Critical Temperature Time: 0 minutes 00:08:19.774 00:08:19.774 Number of Queues 00:08:19.774 ================ 00:08:19.774 Number of I/O Submission Queues: 64 00:08:19.774 Number of I/O Completion Queues: 64 00:08:19.774 00:08:19.774 ZNS Specific Controller Data 00:08:19.774 ============================ 00:08:19.774 Zone Append Size Limit: 0 00:08:19.774 00:08:19.774 00:08:19.774 Active Namespaces 00:08:19.774 ================= 00:08:19.774 Namespace ID:1 00:08:19.774 Error Recovery Timeout: Unlimited 00:08:19.774 Command Set Identifier: NVM (00h) 00:08:19.774 Deallocate: Supported 00:08:19.774 Deallocated/Unwritten Error: Supported 00:08:19.774 Deallocated Read Value: All 0x00 00:08:19.774 Deallocate in Write Zeroes: Not Supported 00:08:19.774 Deallocated Guard Field: 0xFFFF 00:08:19.774 Flush: 
Supported 00:08:19.774 Reservation: Not Supported 00:08:19.774 Metadata Transferred as: Separate Metadata Buffer 00:08:19.774 Namespace Sharing Capabilities: Private 00:08:19.774 Size (in LBAs): 1548666 (5GiB) 00:08:19.774 Capacity (in LBAs): 1548666 (5GiB) 00:08:19.774 Utilization (in LBAs): 1548666 (5GiB) 00:08:19.774 Thin Provisioning: Not Supported 00:08:19.774 Per-NS Atomic Units: No 00:08:19.774 Maximum Single Source Range Length: 128 00:08:19.774 Maximum Copy Length: 128 00:08:19.774 Maximum Source Range Count: 128 00:08:19.774 NGUID/EUI64 Never Reused: No 00:08:19.774 Namespace Write Protected: No 00:08:19.774 Number of LBA Formats: 8 00:08:19.774 Current LBA Format: LBA Format #07 00:08:19.774 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:19.774 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:19.774 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:19.774 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:19.774 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:19.774 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:19.774 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:19.774 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:19.774 00:08:19.774 NVM Specific Namespace Data 00:08:19.774 =========================== 00:08:19.774 Logical Block Storage Tag Mask: 0 00:08:19.774 Protection Information Capabilities: 00:08:19.774 16b Guard Protection Information Storage Tag Support: No 00:08:19.774 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:19.774 Storage Tag Check Read Support: No 00:08:19.774 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.774 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.774 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.774 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.774 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.774 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.774 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.774 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:19.774 05:10:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:19.774 05:10:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:20.037 ===================================================== 00:08:20.037 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:20.037 ===================================================== 00:08:20.037 Controller Capabilities/Features 00:08:20.037 ================================ 00:08:20.037 Vendor ID: 1b36 00:08:20.037 Subsystem Vendor ID: 1af4 00:08:20.037 Serial Number: 12341 00:08:20.037 Model Number: QEMU NVMe Ctrl 00:08:20.037 Firmware Version: 8.0.0 00:08:20.037 Recommended Arb Burst: 6 00:08:20.037 IEEE OUI Identifier: 00 54 52 00:08:20.037 Multi-path I/O 00:08:20.037 May have multiple subsystem ports: No 00:08:20.037 May have multiple controllers: No 00:08:20.037 Associated with SR-IOV VF: No 00:08:20.037 Max Data Transfer Size: 524288 00:08:20.037 Max Number of Namespaces: 256 00:08:20.038 Max Number of I/O Queues: 64 00:08:20.038 NVMe 
Specification Version (VS): 1.4 00:08:20.038 NVMe Specification Version (Identify): 1.4 00:08:20.038 Maximum Queue Entries: 2048 00:08:20.038 Contiguous Queues Required: Yes 00:08:20.038 Arbitration Mechanisms Supported 00:08:20.038 Weighted Round Robin: Not Supported 00:08:20.038 Vendor Specific: Not Supported 00:08:20.038 Reset Timeout: 7500 ms 00:08:20.038 Doorbell Stride: 4 bytes 00:08:20.038 NVM Subsystem Reset: Not Supported 00:08:20.038 Command Sets Supported 00:08:20.038 NVM Command Set: Supported 00:08:20.038 Boot Partition: Not Supported 00:08:20.038 Memory Page Size Minimum: 4096 bytes 00:08:20.038 Memory Page Size Maximum: 65536 bytes 00:08:20.038 Persistent Memory Region: Not Supported 00:08:20.038 Optional Asynchronous Events Supported 00:08:20.038 Namespace Attribute Notices: Supported 00:08:20.038 Firmware Activation Notices: Not Supported 00:08:20.038 ANA Change Notices: Not Supported 00:08:20.038 PLE Aggregate Log Change Notices: Not Supported 00:08:20.038 LBA Status Info Alert Notices: Not Supported 00:08:20.038 EGE Aggregate Log Change Notices: Not Supported 00:08:20.038 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.038 Zone Descriptor Change Notices: Not Supported 00:08:20.038 Discovery Log Change Notices: Not Supported 00:08:20.038 Controller Attributes 00:08:20.038 128-bit Host Identifier: Not Supported 00:08:20.038 Non-Operational Permissive Mode: Not Supported 00:08:20.038 NVM Sets: Not Supported 00:08:20.038 Read Recovery Levels: Not Supported 00:08:20.038 Endurance Groups: Not Supported 00:08:20.038 Predictable Latency Mode: Not Supported 00:08:20.038 Traffic Based Keep Alive: Not Supported 00:08:20.038 Namespace Granularity: Not Supported 00:08:20.038 SQ Associations: Not Supported 00:08:20.038 UUID List: Not Supported 00:08:20.038 Multi-Domain Subsystem: Not Supported 00:08:20.038 Fixed Capacity Management: Not Supported 00:08:20.038 Variable Capacity Management: Not Supported 00:08:20.038 Delete Endurance Group: Not Supported 00:08:20.038 Delete NVM Set: Not Supported 00:08:20.038 Extended LBA Formats Supported: Supported 00:08:20.038 Flexible Data Placement Supported: Not Supported 00:08:20.038 00:08:20.038 Controller Memory Buffer Support 00:08:20.038 ================================ 00:08:20.038 Supported: No 00:08:20.038 00:08:20.038 Persistent Memory Region Support 00:08:20.038 ================================ 00:08:20.038 Supported: No 00:08:20.038 00:08:20.038 Admin Command Set Attributes 00:08:20.038 ============================ 00:08:20.038 Security Send/Receive: Not Supported 00:08:20.038 Format NVM: Supported 00:08:20.038 Firmware Activate/Download: Not Supported 00:08:20.038 Namespace Management: Supported 00:08:20.038 Device Self-Test: Not Supported 00:08:20.038 Directives: Supported 00:08:20.038 NVMe-MI: Not Supported 00:08:20.038 Virtualization Management: Not Supported 00:08:20.038 Doorbell Buffer Config: Supported 00:08:20.038 Get LBA Status Capability: Not Supported 00:08:20.038 Command & Feature Lockdown Capability: Not Supported 00:08:20.038 Abort Command Limit: 4 00:08:20.038 Async Event Request Limit: 4 00:08:20.038 Number of Firmware Slots: N/A 00:08:20.038 Firmware Slot 1 Read-Only: N/A 00:08:20.038 Firmware Activation Without Reset: N/A 00:08:20.038 Multiple Update Detection Support: N/A 00:08:20.038 Firmware Update Granularity: No Information Provided 00:08:20.038 Per-Namespace SMART Log: Yes 00:08:20.038 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.038 Subsystem NQN: nqn.2019-08.org.qemu:12341
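The namespace sizes in these dumps are reported both in LBAs and in GiB, and with the current LBA format #04 (4096-byte data blocks, as reported in this controller's Active Namespaces section further below) the two figures can be cross-checked directly. A sketch of the arithmetic for the 1310720-LBA namespace, using only values that appear in the dump:

lbas=1310720; block=4096   # Size (in LBAs) and LBA format #04 data size from the dump below
echo "$(( lbas * block / 1024 / 1024 / 1024 )) GiB"   # prints: 5 GiB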
00:08:20.038 Command Effects Log Page: Supported 00:08:20.038 Get Log Page Extended Data: Supported 00:08:20.038 Telemetry Log Pages: Not Supported 00:08:20.038 Persistent Event Log Pages: Not Supported 00:08:20.038 Supported Log Pages Log Page: May Support 00:08:20.038 Commands Supported & Effects Log Page: Not Supported 00:08:20.038 Feature Identifiers & Effects Log Page: May Support 00:08:20.038 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.038 Data Area 4 for Telemetry Log: Not Supported 00:08:20.038 Error Log Page Entries Supported: 1 00:08:20.038 Keep Alive: Not Supported 00:08:20.038 00:08:20.038 NVM Command Set Attributes 00:08:20.038 ========================== 00:08:20.038 Submission Queue Entry Size 00:08:20.038 Max: 64 00:08:20.038 Min: 64 00:08:20.038 Completion Queue Entry Size 00:08:20.038 Max: 16 00:08:20.038 Min: 16 00:08:20.038 Number of Namespaces: 256 00:08:20.038 Compare Command: Supported 00:08:20.038 Write Uncorrectable Command: Not Supported 00:08:20.038 Dataset Management Command: Supported 00:08:20.038 Write Zeroes Command: Supported 00:08:20.038 Set Features Save Field: Supported 00:08:20.038 Reservations: Not Supported 00:08:20.038 Timestamp: Supported 00:08:20.038 Copy: Supported 00:08:20.038 Volatile Write Cache: Present 00:08:20.038 Atomic Write Unit (Normal): 1 00:08:20.038 Atomic Write Unit (PFail): 1 00:08:20.038 Atomic Compare & Write Unit: 1 00:08:20.038 Fused Compare & Write: Not Supported 00:08:20.038 Scatter-Gather List 00:08:20.038 SGL Command Set: Supported 00:08:20.038 SGL Keyed: Not Supported 00:08:20.038 SGL Bit Bucket Descriptor: Not Supported 00:08:20.038 SGL Metadata Pointer: Not Supported 00:08:20.038 Oversized SGL: Not Supported 00:08:20.038 SGL Metadata Address: Not Supported 00:08:20.038 SGL Offset: Not Supported 00:08:20.038 Transport SGL Data Block: Not Supported 00:08:20.038 Replay Protected Memory Block: Not Supported 00:08:20.038 00:08:20.038 Firmware Slot Information 00:08:20.038 ========================= 00:08:20.038 Active slot: 1 00:08:20.038 Slot 1 Firmware Revision: 1.0 00:08:20.038 00:08:20.038 00:08:20.038 Commands Supported and Effects 00:08:20.038 ============================== 00:08:20.038 Admin Commands 00:08:20.038 -------------- 00:08:20.038 Delete I/O Submission Queue (00h): Supported 00:08:20.038 Create I/O Submission Queue (01h): Supported 00:08:20.038 Get Log Page (02h): Supported 00:08:20.038 Delete I/O Completion Queue (04h): Supported 00:08:20.038 Create I/O Completion Queue (05h): Supported 00:08:20.038 Identify (06h): Supported 00:08:20.038 Abort (08h): Supported 00:08:20.038 Set Features (09h): Supported 00:08:20.038 Get Features (0Ah): Supported 00:08:20.038 Asynchronous Event Request (0Ch): Supported 00:08:20.038 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.038 Directive Send (19h): Supported 00:08:20.038 Directive Receive (1Ah): Supported 00:08:20.038 Virtualization Management (1Ch): Supported 00:08:20.038 Doorbell Buffer Config (7Ch): Supported 00:08:20.038 Format NVM (80h): Supported LBA-Change 00:08:20.038 I/O Commands 00:08:20.038 ------------ 00:08:20.038 Flush (00h): Supported LBA-Change 00:08:20.038 Write (01h): Supported LBA-Change 00:08:20.038 Read (02h): Supported 00:08:20.038 Compare (05h): Supported 00:08:20.038 Write Zeroes (08h): Supported LBA-Change 00:08:20.038 Dataset Management (09h): Supported LBA-Change 00:08:20.038 Unknown (0Ch): Supported 00:08:20.038 Unknown (12h): Supported 00:08:20.038 Copy (19h): Supported LBA-Change 00:08:20.038 Unknown (1Dh):
Supported LBA-Change 00:08:20.038 00:08:20.038 Error Log 00:08:20.038 ========= 00:08:20.038 00:08:20.038 Arbitration 00:08:20.038 =========== 00:08:20.038 Arbitration Burst: no limit 00:08:20.038 00:08:20.038 Power Management 00:08:20.038 ================ 00:08:20.038 Number of Power States: 1 00:08:20.038 Current Power State: Power State #0 00:08:20.038 Power State #0: 00:08:20.038 Max Power: 25.00 W 00:08:20.038 Non-Operational State: Operational 00:08:20.038 Entry Latency: 16 microseconds 00:08:20.038 Exit Latency: 4 microseconds 00:08:20.038 Relative Read Throughput: 0 00:08:20.038 Relative Read Latency: 0 00:08:20.038 Relative Write Throughput: 0 00:08:20.038 Relative Write Latency: 0 00:08:20.038 Idle Power: Not Reported 00:08:20.038 Active Power: Not Reported 00:08:20.038 Non-Operational Permissive Mode: Not Supported 00:08:20.038 00:08:20.038 Health Information 00:08:20.038 ================== 00:08:20.038 Critical Warnings: 00:08:20.038 Available Spare Space: OK 00:08:20.038 Temperature: OK 00:08:20.038 Device Reliability: OK 00:08:20.038 Read Only: No 00:08:20.038 Volatile Memory Backup: OK 00:08:20.038 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.038 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.038 Available Spare: 0% 00:08:20.038 Available Spare Threshold: 0% 00:08:20.038 Life Percentage Used: 0% 00:08:20.038 Data Units Read: 1065 00:08:20.038 Data Units Written: 930 00:08:20.038 Host Read Commands: 60490 00:08:20.039 Host Write Commands: 59234 00:08:20.039 Controller Busy Time: 0 minutes 00:08:20.039 Power Cycles: 0 00:08:20.039 Power On Hours: 0 hours 00:08:20.039 Unsafe Shutdowns: 0 00:08:20.039 Unrecoverable Media Errors: 0 00:08:20.039 Lifetime Error Log Entries: 0 00:08:20.039 Warning Temperature Time: 0 minutes 00:08:20.039 Critical Temperature Time: 0 minutes 00:08:20.039 00:08:20.039 Number of Queues 00:08:20.039 ================ 00:08:20.039 Number of I/O Submission Queues: 64 00:08:20.039 Number of I/O Completion Queues: 64 00:08:20.039 00:08:20.039 ZNS Specific Controller Data 00:08:20.039 ============================ 00:08:20.039 Zone Append Size Limit: 0 00:08:20.039 00:08:20.039 00:08:20.039 Active Namespaces 00:08:20.039 ================= 00:08:20.039 Namespace ID:1 00:08:20.039 Error Recovery Timeout: Unlimited 00:08:20.039 Command Set Identifier: NVM (00h) 00:08:20.039 Deallocate: Supported 00:08:20.039 Deallocated/Unwritten Error: Supported 00:08:20.039 Deallocated Read Value: All 0x00 00:08:20.039 Deallocate in Write Zeroes: Not Supported 00:08:20.039 Deallocated Guard Field: 0xFFFF 00:08:20.039 Flush: Supported 00:08:20.039 Reservation: Not Supported 00:08:20.039 Namespace Sharing Capabilities: Private 00:08:20.039 Size (in LBAs): 1310720 (5GiB) 00:08:20.039 Capacity (in LBAs): 1310720 (5GiB) 00:08:20.039 Utilization (in LBAs): 1310720 (5GiB) 00:08:20.039 Thin Provisioning: Not Supported 00:08:20.039 Per-NS Atomic Units: No 00:08:20.039 Maximum Single Source Range Length: 128 00:08:20.039 Maximum Copy Length: 128 00:08:20.039 Maximum Source Range Count: 128 00:08:20.039 NGUID/EUI64 Never Reused: No 00:08:20.039 Namespace Write Protected: No 00:08:20.039 Number of LBA Formats: 8 00:08:20.039 Current LBA Format: LBA Format #04 00:08:20.039 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.039 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.039 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.039 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.039 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:20.039 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.039 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.039 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.039 00:08:20.039 NVM Specific Namespace Data 00:08:20.039 =========================== 00:08:20.039 Logical Block Storage Tag Mask: 0 00:08:20.039 Protection Information Capabilities: 00:08:20.039 16b Guard Protection Information Storage Tag Support: No 00:08:20.039 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.039 Storage Tag Check Read Support: No 00:08:20.039 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.039 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.039 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.039 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.039 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.039 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.039 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.039 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.039 05:10:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:20.039 05:10:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:20.039 ===================================================== 00:08:20.039 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:20.039 ===================================================== 00:08:20.039 Controller Capabilities/Features 00:08:20.039 ================================ 00:08:20.039 Vendor ID: 1b36 00:08:20.039 Subsystem Vendor ID: 1af4 00:08:20.039 Serial Number: 12342 00:08:20.039 Model Number: QEMU NVMe Ctrl 00:08:20.039 Firmware Version: 8.0.0 00:08:20.039 Recommended Arb Burst: 6 00:08:20.039 IEEE OUI Identifier: 00 54 52 00:08:20.039 Multi-path I/O 00:08:20.039 May have multiple subsystem ports: No 00:08:20.039 May have multiple controllers: No 00:08:20.039 Associated with SR-IOV VF: No 00:08:20.039 Max Data Transfer Size: 524288 00:08:20.039 Max Number of Namespaces: 256 00:08:20.039 Max Number of I/O Queues: 64 00:08:20.039 NVMe Specification Version (VS): 1.4 00:08:20.039 NVMe Specification Version (Identify): 1.4 00:08:20.039 Maximum Queue Entries: 2048 00:08:20.039 Contiguous Queues Required: Yes 00:08:20.039 Arbitration Mechanisms Supported 00:08:20.039 Weighted Round Robin: Not Supported 00:08:20.039 Vendor Specific: Not Supported 00:08:20.039 Reset Timeout: 7500 ms 00:08:20.039 Doorbell Stride: 4 bytes 00:08:20.039 NVM Subsystem Reset: Not Supported 00:08:20.039 Command Sets Supported 00:08:20.039 NVM Command Set: Supported 00:08:20.039 Boot Partition: Not Supported 00:08:20.039 Memory Page Size Minimum: 4096 bytes 00:08:20.039 Memory Page Size Maximum: 65536 bytes 00:08:20.039 Persistent Memory Region: Not Supported 00:08:20.039 Optional Asynchronous Events Supported 00:08:20.039 Namespace Attribute Notices: Supported 00:08:20.039 Firmware Activation Notices: Not Supported 00:08:20.039 ANA Change Notices: Not Supported 00:08:20.039 PLE Aggregate Log Change Notices: Not Supported 00:08:20.039 LBA Status Info Alert Notices: 
Not Supported 00:08:20.039 EGE Aggregate Log Change Notices: Not Supported 00:08:20.039 Normal NVM Subsystem Shutdown Event: Not Supported 00:08:20.039 Zone Descriptor Change Notices: Not Supported 00:08:20.039 Discovery Log Change Notices: Not Supported 00:08:20.039 Controller Attributes 00:08:20.039 128-bit Host Identifier: Not Supported 00:08:20.039 Non-Operational Permissive Mode: Not Supported 00:08:20.039 NVM Sets: Not Supported 00:08:20.039 Read Recovery Levels: Not Supported 00:08:20.039 Endurance Groups: Not Supported 00:08:20.039 Predictable Latency Mode: Not Supported 00:08:20.039 Traffic Based Keep Alive: Not Supported 00:08:20.039 Namespace Granularity: Not Supported 00:08:20.039 SQ Associations: Not Supported 00:08:20.039 UUID List: Not Supported 00:08:20.039 Multi-Domain Subsystem: Not Supported 00:08:20.039 Fixed Capacity Management: Not Supported 00:08:20.039 Variable Capacity Management: Not Supported 00:08:20.039 Delete Endurance Group: Not Supported 00:08:20.039 Delete NVM Set: Not Supported 00:08:20.039 Extended LBA Formats Supported: Supported 00:08:20.039 Flexible Data Placement Supported: Not Supported 00:08:20.039 00:08:20.039 Controller Memory Buffer Support 00:08:20.039 ================================ 00:08:20.039 Supported: No 00:08:20.039 00:08:20.039 Persistent Memory Region Support 00:08:20.039 ================================ 00:08:20.039 Supported: No 00:08:20.039 00:08:20.039 Admin Command Set Attributes 00:08:20.039 ============================ 00:08:20.039 Security Send/Receive: Not Supported 00:08:20.039 Format NVM: Supported 00:08:20.039 Firmware Activate/Download: Not Supported 00:08:20.039 Namespace Management: Supported 00:08:20.039 Device Self-Test: Not Supported 00:08:20.039 Directives: Supported 00:08:20.039 NVMe-MI: Not Supported 00:08:20.039 Virtualization Management: Not Supported 00:08:20.039 Doorbell Buffer Config: Supported 00:08:20.039 Get LBA Status Capability: Not Supported 00:08:20.039 Command & Feature Lockdown Capability: Not Supported 00:08:20.039 Abort Command Limit: 4 00:08:20.039 Async Event Request Limit: 4 00:08:20.039 Number of Firmware Slots: N/A 00:08:20.039 Firmware Slot 1 Read-Only: N/A 00:08:20.039 Firmware Activation Without Reset: N/A 00:08:20.039 Multiple Update Detection Support: N/A 00:08:20.039 Firmware Update Granularity: No Information Provided 00:08:20.039 Per-Namespace SMART Log: Yes 00:08:20.039 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.039 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:20.039 Command Effects Log Page: Supported 00:08:20.039 Get Log Page Extended Data: Supported 00:08:20.039 Telemetry Log Pages: Not Supported 00:08:20.039 Persistent Event Log Pages: Not Supported 00:08:20.039 Supported Log Pages Log Page: May Support 00:08:20.039 Commands Supported & Effects Log Page: Not Supported 00:08:20.039 Feature Identifiers & Effects Log Page: May Support 00:08:20.039 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.039 Data Area 4 for Telemetry Log: Not Supported 00:08:20.039 Error Log Page Entries Supported: 1 00:08:20.039 Keep Alive: Not Supported 00:08:20.039 00:08:20.039 NVM Command Set Attributes 00:08:20.039 ========================== 00:08:20.039 Submission Queue Entry Size 00:08:20.039 Max: 64 00:08:20.039 Min: 64 00:08:20.039 Completion Queue Entry Size 00:08:20.039 Max: 16 00:08:20.039 Min: 16 00:08:20.039 Number of Namespaces: 256 00:08:20.039 Compare Command: Supported 00:08:20.039 Write Uncorrectable Command: Not Supported 00:08:20.039 Dataset Management Command:
Supported 00:08:20.040 Write Zeroes Command: Supported 00:08:20.040 Set Features Save Field: Supported 00:08:20.040 Reservations: Not Supported 00:08:20.040 Timestamp: Supported 00:08:20.040 Copy: Supported 00:08:20.040 Volatile Write Cache: Present 00:08:20.040 Atomic Write Unit (Normal): 1 00:08:20.040 Atomic Write Unit (PFail): 1 00:08:20.040 Atomic Compare & Write Unit: 1 00:08:20.040 Fused Compare & Write: Not Supported 00:08:20.040 Scatter-Gather List 00:08:20.040 SGL Command Set: Supported 00:08:20.040 SGL Keyed: Not Supported 00:08:20.040 SGL Bit Bucket Descriptor: Not Supported 00:08:20.040 SGL Metadata Pointer: Not Supported 00:08:20.040 Oversized SGL: Not Supported 00:08:20.040 SGL Metadata Address: Not Supported 00:08:20.040 SGL Offset: Not Supported 00:08:20.040 Transport SGL Data Block: Not Supported 00:08:20.040 Replay Protected Memory Block: Not Supported 00:08:20.040 00:08:20.040 Firmware Slot Information 00:08:20.040 ========================= 00:08:20.040 Active slot: 1 00:08:20.040 Slot 1 Firmware Revision: 1.0 00:08:20.040 00:08:20.040 00:08:20.040 Commands Supported and Effects 00:08:20.040 ============================== 00:08:20.040 Admin Commands 00:08:20.040 -------------- 00:08:20.040 Delete I/O Submission Queue (00h): Supported 00:08:20.040 Create I/O Submission Queue (01h): Supported 00:08:20.040 Get Log Page (02h): Supported 00:08:20.040 Delete I/O Completion Queue (04h): Supported 00:08:20.040 Create I/O Completion Queue (05h): Supported 00:08:20.040 Identify (06h): Supported 00:08:20.040 Abort (08h): Supported 00:08:20.040 Set Features (09h): Supported 00:08:20.040 Get Features (0Ah): Supported 00:08:20.040 Asynchronous Event Request (0Ch): Supported 00:08:20.040 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.040 Directive Send (19h): Supported 00:08:20.040 Directive Receive (1Ah): Supported 00:08:20.040 Virtualization Management (1Ch): Supported 00:08:20.040 Doorbell Buffer Config (7Ch): Supported 00:08:20.040 Format NVM (80h): Supported LBA-Change 00:08:20.040 I/O Commands 00:08:20.040 ------------ 00:08:20.040 Flush (00h): Supported LBA-Change 00:08:20.040 Write (01h): Supported LBA-Change 00:08:20.040 Read (02h): Supported 00:08:20.040 Compare (05h): Supported 00:08:20.040 Write Zeroes (08h): Supported LBA-Change 00:08:20.040 Dataset Management (09h): Supported LBA-Change 00:08:20.040 Unknown (0Ch): Supported 00:08:20.040 Unknown (12h): Supported 00:08:20.040 Copy (19h): Supported LBA-Change 00:08:20.040 Unknown (1Dh): Supported LBA-Change 00:08:20.040 00:08:20.040 Error Log 00:08:20.040 ========= 00:08:20.040 00:08:20.040 Arbitration 00:08:20.040 =========== 00:08:20.040 Arbitration Burst: no limit 00:08:20.040 00:08:20.040 Power Management 00:08:20.040 ================ 00:08:20.040 Number of Power States: 1 00:08:20.040 Current Power State: Power State #0 00:08:20.040 Power State #0: 00:08:20.040 Max Power: 25.00 W 00:08:20.040 Non-Operational State: Operational 00:08:20.040 Entry Latency: 16 microseconds 00:08:20.040 Exit Latency: 4 microseconds 00:08:20.040 Relative Read Throughput: 0 00:08:20.040 Relative Read Latency: 0 00:08:20.040 Relative Write Throughput: 0 00:08:20.040 Relative Write Latency: 0 00:08:20.040 Idle Power: Not Reported 00:08:20.040 Active Power: Not Reported 00:08:20.040 Non-Operational Permissive Mode: Not Supported 00:08:20.040 00:08:20.040 Health Information 00:08:20.040 ================== 00:08:20.040 Critical Warnings: 00:08:20.040 Available Spare Space: OK 00:08:20.040 Temperature: OK 00:08:20.040 Device 
Reliability: OK 00:08:20.040 Read Only: No 00:08:20.040 Volatile Memory Backup: OK 00:08:20.040 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.040 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.040 Available Spare: 0% 00:08:20.040 Available Spare Threshold: 0% 00:08:20.040 Life Percentage Used: 0% 00:08:20.040 Data Units Read: 2302 00:08:20.040 Data Units Written: 2090 00:08:20.040 Host Read Commands: 123527 00:08:20.040 Host Write Commands: 121796 00:08:20.040 Controller Busy Time: 0 minutes 00:08:20.040 Power Cycles: 0 00:08:20.040 Power On Hours: 0 hours 00:08:20.040 Unsafe Shutdowns: 0 00:08:20.040 Unrecoverable Media Errors: 0 00:08:20.040 Lifetime Error Log Entries: 0 00:08:20.040 Warning Temperature Time: 0 minutes 00:08:20.040 Critical Temperature Time: 0 minutes 00:08:20.040 00:08:20.040 Number of Queues 00:08:20.040 ================ 00:08:20.040 Number of I/O Submission Queues: 64 00:08:20.040 Number of I/O Completion Queues: 64 00:08:20.040 00:08:20.040 ZNS Specific Controller Data 00:08:20.040 ============================ 00:08:20.040 Zone Append Size Limit: 0 00:08:20.040 00:08:20.040 00:08:20.040 Active Namespaces 00:08:20.040 ================= 00:08:20.040 Namespace ID:1 00:08:20.040 Error Recovery Timeout: Unlimited 00:08:20.040 Command Set Identifier: NVM (00h) 00:08:20.040 Deallocate: Supported 00:08:20.040 Deallocated/Unwritten Error: Supported 00:08:20.040 Deallocated Read Value: All 0x00 00:08:20.040 Deallocate in Write Zeroes: Not Supported 00:08:20.040 Deallocated Guard Field: 0xFFFF 00:08:20.040 Flush: Supported 00:08:20.040 Reservation: Not Supported 00:08:20.040 Namespace Sharing Capabilities: Private 00:08:20.040 Size (in LBAs): 1048576 (4GiB) 00:08:20.040 Capacity (in LBAs): 1048576 (4GiB) 00:08:20.040 Utilization (in LBAs): 1048576 (4GiB) 00:08:20.040 Thin Provisioning: Not Supported 00:08:20.040 Per-NS Atomic Units: No 00:08:20.040 Maximum Single Source Range Length: 128 00:08:20.040 Maximum Copy Length: 128 00:08:20.040 Maximum Source Range Count: 128 00:08:20.040 NGUID/EUI64 Never Reused: No 00:08:20.040 Namespace Write Protected: No 00:08:20.040 Number of LBA Formats: 8 00:08:20.040 Current LBA Format: LBA Format #04 00:08:20.040 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.040 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.040 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.040 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.040 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.040 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.040 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.040 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.040 00:08:20.040 NVM Specific Namespace Data 00:08:20.040 =========================== 00:08:20.040 Logical Block Storage Tag Mask: 0 00:08:20.040 Protection Information Capabilities: 00:08:20.040 16b Guard Protection Information Storage Tag Support: No 00:08:20.040 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.040 Storage Tag Check Read Support: No 00:08:20.040 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.040 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.040 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.040 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.040 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.040 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.040 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.040 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.040 Namespace ID:2 00:08:20.040 Error Recovery Timeout: Unlimited 00:08:20.040 Command Set Identifier: NVM (00h) 00:08:20.040 Deallocate: Supported 00:08:20.040 Deallocated/Unwritten Error: Supported 00:08:20.040 Deallocated Read Value: All 0x00 00:08:20.040 Deallocate in Write Zeroes: Not Supported 00:08:20.040 Deallocated Guard Field: 0xFFFF 00:08:20.040 Flush: Supported 00:08:20.040 Reservation: Not Supported 00:08:20.040 Namespace Sharing Capabilities: Private 00:08:20.040 Size (in LBAs): 1048576 (4GiB) 00:08:20.040 Capacity (in LBAs): 1048576 (4GiB) 00:08:20.040 Utilization (in LBAs): 1048576 (4GiB) 00:08:20.040 Thin Provisioning: Not Supported 00:08:20.040 Per-NS Atomic Units: No 00:08:20.040 Maximum Single Source Range Length: 128 00:08:20.040 Maximum Copy Length: 128 00:08:20.040 Maximum Source Range Count: 128 00:08:20.040 NGUID/EUI64 Never Reused: No 00:08:20.040 Namespace Write Protected: No 00:08:20.040 Number of LBA Formats: 8 00:08:20.040 Current LBA Format: LBA Format #04 00:08:20.040 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.040 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.040 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.040 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.040 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.040 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.040 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.040 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.040 00:08:20.040 NVM Specific Namespace Data 00:08:20.040 =========================== 00:08:20.041 Logical Block Storage Tag Mask: 0 00:08:20.041 Protection Information Capabilities: 00:08:20.041 16b Guard Protection Information Storage Tag Support: No 00:08:20.041 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.041 Storage Tag Check Read Support: No 00:08:20.041 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.041 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.041 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.041 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.041 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.041 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.041 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.041 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.041 Namespace ID:3 00:08:20.041 Error Recovery Timeout: Unlimited 00:08:20.041 Command Set Identifier: NVM (00h) 00:08:20.041 Deallocate: Supported 00:08:20.041 Deallocated/Unwritten Error: Supported 00:08:20.041 Deallocated Read Value: All 0x00 00:08:20.041 Deallocate in Write Zeroes: Not Supported 00:08:20.041 Deallocated Guard Field: 0xFFFF 00:08:20.041 Flush: Supported 00:08:20.041 Reservation: Not Supported 00:08:20.041 
Namespace Sharing Capabilities: Private 00:08:20.041 Size (in LBAs): 1048576 (4GiB) 00:08:20.041 Capacity (in LBAs): 1048576 (4GiB) 00:08:20.041 Utilization (in LBAs): 1048576 (4GiB) 00:08:20.041 Thin Provisioning: Not Supported 00:08:20.041 Per-NS Atomic Units: No 00:08:20.041 Maximum Single Source Range Length: 128 00:08:20.041 Maximum Copy Length: 128 00:08:20.041 Maximum Source Range Count: 128 00:08:20.041 NGUID/EUI64 Never Reused: No 00:08:20.041 Namespace Write Protected: No 00:08:20.041 Number of LBA Formats: 8 00:08:20.041 Current LBA Format: LBA Format #04 00:08:20.041 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.041 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.041 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.041 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.041 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.041 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.041 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.041 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.041 00:08:20.041 NVM Specific Namespace Data 00:08:20.041 =========================== 00:08:20.041 Logical Block Storage Tag Mask: 0 00:08:20.041 Protection Information Capabilities: 00:08:20.041 16b Guard Protection Information Storage Tag Support: No 00:08:20.041 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.041 Storage Tag Check Read Support: No 00:08:20.041 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.041 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.041 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.041 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.041 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.041 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.041 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.041 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.041 05:10:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:20.041 05:10:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:20.303 ===================================================== 00:08:20.303 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:20.303 ===================================================== 00:08:20.303 Controller Capabilities/Features 00:08:20.303 ================================ 00:08:20.303 Vendor ID: 1b36 00:08:20.303 Subsystem Vendor ID: 1af4 00:08:20.303 Serial Number: 12343 00:08:20.303 Model Number: QEMU NVMe Ctrl 00:08:20.303 Firmware Version: 8.0.0 00:08:20.303 Recommended Arb Burst: 6 00:08:20.303 IEEE OUI Identifier: 00 54 52 00:08:20.303 Multi-path I/O 00:08:20.303 May have multiple subsystem ports: No 00:08:20.303 May have multiple controllers: Yes 00:08:20.303 Associated with SR-IOV VF: No 00:08:20.303 Max Data Transfer Size: 524288 00:08:20.303 Max Number of Namespaces: 256 00:08:20.303 Max Number of I/O Queues: 64 00:08:20.303 NVMe Specification Version (VS): 1.4 00:08:20.303 NVMe Specification Version (Identify): 1.4 00:08:20.303 Maximum Queue Entries: 2048 
00:08:20.303 Contiguous Queues Required: Yes 00:08:20.303 Arbitration Mechanisms Supported 00:08:20.303 Weighted Round Robin: Not Supported 00:08:20.303 Vendor Specific: Not Supported 00:08:20.303 Reset Timeout: 7500 ms 00:08:20.303 Doorbell Stride: 4 bytes 00:08:20.303 NVM Subsystem Reset: Not Supported 00:08:20.303 Command Sets Supported 00:08:20.303 NVM Command Set: Supported 00:08:20.303 Boot Partition: Not Supported 00:08:20.303 Memory Page Size Minimum: 4096 bytes 00:08:20.303 Memory Page Size Maximum: 65536 bytes 00:08:20.303 Persistent Memory Region: Not Supported 00:08:20.303 Optional Asynchronous Events Supported 00:08:20.303 Namespace Attribute Notices: Supported 00:08:20.303 Firmware Activation Notices: Not Supported 00:08:20.303 ANA Change Notices: Not Supported 00:08:20.303 PLE Aggregate Log Change Notices: Not Supported 00:08:20.303 LBA Status Info Alert Notices: Not Supported 00:08:20.303 EGE Aggregate Log Change Notices: Not Supported 00:08:20.303 Normal NVM Subsystem Shutdown Event: Not Supported 00:08:20.303 Zone Descriptor Change Notices: Not Supported 00:08:20.303 Discovery Log Change Notices: Not Supported 00:08:20.303 Controller Attributes 00:08:20.303 128-bit Host Identifier: Not Supported 00:08:20.303 Non-Operational Permissive Mode: Not Supported 00:08:20.303 NVM Sets: Not Supported 00:08:20.303 Read Recovery Levels: Not Supported 00:08:20.303 Endurance Groups: Supported 00:08:20.303 Predictable Latency Mode: Not Supported 00:08:20.303 Traffic Based Keep Alive: Not Supported 00:08:20.303 Namespace Granularity: Not Supported 00:08:20.303 SQ Associations: Not Supported 00:08:20.303 UUID List: Not Supported 00:08:20.303 Multi-Domain Subsystem: Not Supported 00:08:20.303 Fixed Capacity Management: Not Supported 00:08:20.303 Variable Capacity Management: Not Supported 00:08:20.303 Delete Endurance Group: Not Supported 00:08:20.303 Delete NVM Set: Not Supported 00:08:20.303 Extended LBA Formats Supported: Supported 00:08:20.303 Flexible Data Placement Supported: Supported 00:08:20.303 00:08:20.303 Controller Memory Buffer Support 00:08:20.303 ================================ 00:08:20.303 Supported: No 00:08:20.303 00:08:20.303 Persistent Memory Region Support 00:08:20.303 ================================ 00:08:20.303 Supported: No 00:08:20.303 00:08:20.303 Admin Command Set Attributes 00:08:20.303 ============================ 00:08:20.303 Security Send/Receive: Not Supported 00:08:20.303 Format NVM: Supported 00:08:20.303 Firmware Activate/Download: Not Supported 00:08:20.303 Namespace Management: Supported 00:08:20.303 Device Self-Test: Not Supported 00:08:20.303 Directives: Supported 00:08:20.303 NVMe-MI: Not Supported 00:08:20.303 Virtualization Management: Not Supported 00:08:20.303 Doorbell Buffer Config: Supported 00:08:20.303 Get LBA Status Capability: Not Supported 00:08:20.303 Command & Feature Lockdown Capability: Not Supported 00:08:20.303 Abort Command Limit: 4 00:08:20.303 Async Event Request Limit: 4 00:08:20.303 Number of Firmware Slots: N/A 00:08:20.303 Firmware Slot 1 Read-Only: N/A 00:08:20.303 Firmware Activation Without Reset: N/A 00:08:20.303 Multiple Update Detection Support: N/A 00:08:20.303 Firmware Update Granularity: No Information Provided 00:08:20.303 Per-Namespace SMART Log: Yes 00:08:20.303 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.303 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:20.303 Command Effects Log Page: Supported 00:08:20.303 Get Log Page Extended Data: Supported 00:08:20.303 Telemetry Log Pages: Not
Supported 00:08:20.303 Persistent Event Log Pages: Not Supported 00:08:20.303 Supported Log Pages Log Page: May Support 00:08:20.303 Commands Supported & Effects Log Page: Not Supported 00:08:20.303 Feature Identifiers & Effects Log Page: May Support 00:08:20.303 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.303 Data Area 4 for Telemetry Log: Not Supported 00:08:20.303 Error Log Page Entries Supported: 1 00:08:20.303 Keep Alive: Not Supported 00:08:20.303 00:08:20.303 NVM Command Set Attributes 00:08:20.303 ========================== 00:08:20.303 Submission Queue Entry Size 00:08:20.303 Max: 64 00:08:20.304 Min: 64 00:08:20.304 Completion Queue Entry Size 00:08:20.304 Max: 16 00:08:20.304 Min: 16 00:08:20.304 Number of Namespaces: 256 00:08:20.304 Compare Command: Supported 00:08:20.304 Write Uncorrectable Command: Not Supported 00:08:20.304 Dataset Management Command: Supported 00:08:20.304 Write Zeroes Command: Supported 00:08:20.304 Set Features Save Field: Supported 00:08:20.304 Reservations: Not Supported 00:08:20.304 Timestamp: Supported 00:08:20.304 Copy: Supported 00:08:20.304 Volatile Write Cache: Present 00:08:20.304 Atomic Write Unit (Normal): 1 00:08:20.304 Atomic Write Unit (PFail): 1 00:08:20.304 Atomic Compare & Write Unit: 1 00:08:20.304 Fused Compare & Write: Not Supported 00:08:20.304 Scatter-Gather List 00:08:20.304 SGL Command Set: Supported 00:08:20.304 SGL Keyed: Not Supported 00:08:20.304 SGL Bit Bucket Descriptor: Not Supported 00:08:20.304 SGL Metadata Pointer: Not Supported 00:08:20.304 Oversized SGL: Not Supported 00:08:20.304 SGL Metadata Address: Not Supported 00:08:20.304 SGL Offset: Not Supported 00:08:20.304 Transport SGL Data Block: Not Supported 00:08:20.304 Replay Protected Memory Block: Not Supported 00:08:20.304 00:08:20.304 Firmware Slot Information 00:08:20.304 ========================= 00:08:20.304 Active slot: 1 00:08:20.304 Slot 1 Firmware Revision: 1.0 00:08:20.304 00:08:20.304 00:08:20.304 Commands Supported and Effects 00:08:20.304 ============================== 00:08:20.304 Admin Commands 00:08:20.304 -------------- 00:08:20.304 Delete I/O Submission Queue (00h): Supported 00:08:20.304 Create I/O Submission Queue (01h): Supported 00:08:20.304 Get Log Page (02h): Supported 00:08:20.304 Delete I/O Completion Queue (04h): Supported 00:08:20.304 Create I/O Completion Queue (05h): Supported 00:08:20.304 Identify (06h): Supported 00:08:20.304 Abort (08h): Supported 00:08:20.304 Set Features (09h): Supported 00:08:20.304 Get Features (0Ah): Supported 00:08:20.304 Asynchronous Event Request (0Ch): Supported 00:08:20.304 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.304 Directive Send (19h): Supported 00:08:20.304 Directive Receive (1Ah): Supported 00:08:20.304 Virtualization Management (1Ch): Supported 00:08:20.304 Doorbell Buffer Config (7Ch): Supported 00:08:20.304 Format NVM (80h): Supported LBA-Change 00:08:20.304 I/O Commands 00:08:20.304 ------------ 00:08:20.304 Flush (00h): Supported LBA-Change 00:08:20.304 Write (01h): Supported LBA-Change 00:08:20.304 Read (02h): Supported 00:08:20.304 Compare (05h): Supported 00:08:20.304 Write Zeroes (08h): Supported LBA-Change 00:08:20.304 Dataset Management (09h): Supported LBA-Change 00:08:20.304 Unknown (0Ch): Supported 00:08:20.304 Unknown (12h): Supported 00:08:20.304 Copy (19h): Supported LBA-Change 00:08:20.304 Unknown (1Dh): Supported LBA-Change 00:08:20.304 00:08:20.304 Error Log 00:08:20.304 ========= 00:08:20.304 00:08:20.304 Arbitration 00:08:20.304 ===========
00:08:20.304 Arbitration Burst: no limit 00:08:20.304 00:08:20.304 Power Management 00:08:20.304 ================ 00:08:20.304 Number of Power States: 1 00:08:20.304 Current Power State: Power State #0 00:08:20.304 Power State #0: 00:08:20.304 Max Power: 25.00 W 00:08:20.304 Non-Operational State: Operational 00:08:20.304 Entry Latency: 16 microseconds 00:08:20.304 Exit Latency: 4 microseconds 00:08:20.304 Relative Read Throughput: 0 00:08:20.304 Relative Read Latency: 0 00:08:20.304 Relative Write Throughput: 0 00:08:20.304 Relative Write Latency: 0 00:08:20.304 Idle Power: Not Reported 00:08:20.304 Active Power: Not Reported 00:08:20.304 Non-Operational Permissive Mode: Not Supported 00:08:20.304 00:08:20.304 Health Information 00:08:20.304 ================== 00:08:20.304 Critical Warnings: 00:08:20.304 Available Spare Space: OK 00:08:20.304 Temperature: OK 00:08:20.304 Device Reliability: OK 00:08:20.304 Read Only: No 00:08:20.304 Volatile Memory Backup: OK 00:08:20.304 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.304 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.304 Available Spare: 0% 00:08:20.304 Available Spare Threshold: 0% 00:08:20.304 Life Percentage Used: 0% 00:08:20.304 Data Units Read: 950 00:08:20.304 Data Units Written: 879 00:08:20.304 Host Read Commands: 42840 00:08:20.304 Host Write Commands: 42263 00:08:20.304 Controller Busy Time: 0 minutes 00:08:20.304 Power Cycles: 0 00:08:20.304 Power On Hours: 0 hours 00:08:20.304 Unsafe Shutdowns: 0 00:08:20.304 Unrecoverable Media Errors: 0 00:08:20.304 Lifetime Error Log Entries: 0 00:08:20.304 Warning Temperature Time: 0 minutes 00:08:20.304 Critical Temperature Time: 0 minutes 00:08:20.304 00:08:20.304 Number of Queues 00:08:20.304 ================ 00:08:20.304 Number of I/O Submission Queues: 64 00:08:20.304 Number of I/O Completion Queues: 64 00:08:20.304 00:08:20.304 ZNS Specific Controller Data 00:08:20.304 ============================ 00:08:20.304 Zone Append Size Limit: 0 00:08:20.304 00:08:20.304 00:08:20.304 Active Namespaces 00:08:20.304 ================= 00:08:20.304 Namespace ID:1 00:08:20.304 Error Recovery Timeout: Unlimited 00:08:20.304 Command Set Identifier: NVM (00h) 00:08:20.304 Deallocate: Supported 00:08:20.304 Deallocated/Unwritten Error: Supported 00:08:20.304 Deallocated Read Value: All 0x00 00:08:20.304 Deallocate in Write Zeroes: Not Supported 00:08:20.304 Deallocated Guard Field: 0xFFFF 00:08:20.304 Flush: Supported 00:08:20.304 Reservation: Not Supported 00:08:20.304 Namespace Sharing Capabilities: Multiple Controllers 00:08:20.304 Size (in LBAs): 262144 (1GiB) 00:08:20.304 Capacity (in LBAs): 262144 (1GiB) 00:08:20.304 Utilization (in LBAs): 262144 (1GiB) 00:08:20.304 Thin Provisioning: Not Supported 00:08:20.304 Per-NS Atomic Units: No 00:08:20.304 Maximum Single Source Range Length: 128 00:08:20.304 Maximum Copy Length: 128 00:08:20.304 Maximum Source Range Count: 128 00:08:20.304 NGUID/EUI64 Never Reused: No 00:08:20.304 Namespace Write Protected: No 00:08:20.304 Endurance group ID: 1 00:08:20.304 Number of LBA Formats: 8 00:08:20.304 Current LBA Format: LBA Format #04 00:08:20.304 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.304 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.304 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.304 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.304 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.304 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.304 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:20.304 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.304 00:08:20.304 Get Feature FDP: 00:08:20.304 ================ 00:08:20.304 Enabled: Yes 00:08:20.304 FDP configuration index: 0 00:08:20.304 00:08:20.304 FDP configurations log page 00:08:20.304 =========================== 00:08:20.304 Number of FDP configurations: 1 00:08:20.304 Version: 0 00:08:20.304 Size: 112 00:08:20.304 FDP Configuration Descriptor: 0 00:08:20.304 Descriptor Size: 96 00:08:20.304 Reclaim Group Identifier format: 2 00:08:20.304 FDP Volatile Write Cache: Not Present 00:08:20.304 FDP Configuration: Valid 00:08:20.304 Vendor Specific Size: 0 00:08:20.304 Number of Reclaim Groups: 2 00:08:20.304 Number of Reclaim Unit Handles: 8 00:08:20.304 Max Placement Identifiers: 128 00:08:20.304 Number of Namespaces Supported: 256 00:08:20.304 Reclaim Unit Nominal Size: 6000000 bytes 00:08:20.304 Estimated Reclaim Unit Time Limit: Not Reported 00:08:20.304 RUH Desc #000: RUH Type: Initially Isolated 00:08:20.304 RUH Desc #001: RUH Type: Initially Isolated 00:08:20.304 RUH Desc #002: RUH Type: Initially Isolated 00:08:20.304 RUH Desc #003: RUH Type: Initially Isolated 00:08:20.304 RUH Desc #004: RUH Type: Initially Isolated 00:08:20.304 RUH Desc #005: RUH Type: Initially Isolated 00:08:20.304 RUH Desc #006: RUH Type: Initially Isolated 00:08:20.304 RUH Desc #007: RUH Type: Initially Isolated 00:08:20.304 00:08:20.304 FDP reclaim unit handle usage log page 00:08:20.304 ====================================== 00:08:20.304 Number of Reclaim Unit Handles: 8 00:08:20.304 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:20.304 RUH Usage Desc #001: RUH Attributes: Unused 00:08:20.304 RUH Usage Desc #002: RUH Attributes: Unused 00:08:20.304 RUH Usage Desc #003: RUH Attributes: Unused 00:08:20.304 RUH Usage Desc #004: RUH Attributes: Unused 00:08:20.304 RUH Usage Desc #005: RUH Attributes: Unused 00:08:20.304 RUH Usage Desc #006: RUH Attributes: Unused 00:08:20.304 RUH Usage Desc #007: RUH Attributes: Unused 00:08:20.305 00:08:20.305 FDP statistics log page 00:08:20.305 ======================= 00:08:20.305 Host bytes with metadata written: 554213376 00:08:20.305 Media bytes with metadata written: 555798528 00:08:20.305 Media bytes erased: 0 00:08:20.305 00:08:20.305 FDP events log page 00:08:20.305 =================== 00:08:20.305 Number of FDP events: 0 00:08:20.305 00:08:20.305 NVM Specific Namespace Data 00:08:20.305 =========================== 00:08:20.305 Logical Block Storage Tag Mask: 0 00:08:20.305 Protection Information Capabilities: 00:08:20.305 16b Guard Protection Information Storage Tag Support: No 00:08:20.305 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.305 Storage Tag Check Read Support: No 00:08:20.305 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.305 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.305 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.305 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.305 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.305 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.305 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.305 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.305 00:08:20.305 real 0m1.010s 00:08:20.305 user 0m0.358s 00:08:20.305 sys 0m0.452s 00:08:20.305 05:10:13 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.305 ************************************ 00:08:20.305 END TEST nvme_identify 00:08:20.305 05:10:13 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:20.305 ************************************ 00:08:20.305 05:10:13 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:20.305 05:10:13 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.305 05:10:13 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.305 05:10:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:20.305 ************************************ 00:08:20.305 START TEST nvme_perf 00:08:20.305 ************************************ 00:08:20.305 05:10:13 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:20.305 05:10:13 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:21.690 Initializing NVMe Controllers 00:08:21.690 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:21.690 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:21.690 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:21.690 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:21.690 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:21.690 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:21.690 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:21.690 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:21.690 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:21.690 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:21.690 Initialization complete. Launching workers. 
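For reference, the two invocations traced above come straight from nvme/nvme.sh: the identify pass loops over every controller under test, then spdk_nvme_perf drives the read workload whose latency results follow. A minimal sketch of that sequence, assuming a bdfs array matching the four controllers attached in this run (the harness builds the real array elsewhere; the flag readings are taken from perf's usage text, not from this log):

  #!/usr/bin/env bash
  # Assumed contents, inferred from the "Attached to NVMe Controller" lines above.
  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)

  # nvme.sh@15-16: dump Identify Controller/Namespace data for each PCIe device.
  for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0
  done

  # nvme.sh@22: queue depth 128 (-q), read workload (-w), 12288-byte I/Os (-o),
  # 1-second run (-t), latency tracking with per-device histograms (-LL),
  # shared-memory group 0 (-i).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N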
00:08:21.690 ======================================================== 00:08:21.690 Latency(us) 00:08:21.690 Device Information : IOPS MiB/s Average min max 00:08:21.690 PCIE (0000:00:10.0) NSID 1 from core 0: 14562.63 170.66 8791.52 4538.11 31010.02 00:08:21.690 PCIE (0000:00:11.0) NSID 1 from core 0: 14562.63 170.66 8785.65 4443.03 30297.85 00:08:21.690 PCIE (0000:00:13.0) NSID 1 from core 0: 14562.63 170.66 8778.52 3932.14 30380.38 00:08:21.690 PCIE (0000:00:12.0) NSID 1 from core 0: 14562.63 170.66 8771.37 3692.36 29875.08 00:08:21.690 PCIE (0000:00:12.0) NSID 2 from core 0: 14562.63 170.66 8764.09 3538.07 29453.31 00:08:21.690 PCIE (0000:00:12.0) NSID 3 from core 0: 14626.50 171.40 8718.75 3308.72 24584.21 00:08:21.690 ======================================================== 00:08:21.690 Total : 87439.63 1024.68 8768.28 3308.72 31010.02 00:08:21.690 00:08:21.690 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:21.690 ================================================================================= 00:08:21.690 1.00000% : 5923.446us 00:08:21.690 10.00000% : 6225.920us 00:08:21.690 25.00000% : 6604.012us 00:08:21.690 50.00000% : 8872.566us 00:08:21.690 75.00000% : 9880.812us 00:08:21.690 90.00000% : 11796.480us 00:08:21.690 95.00000% : 13712.148us 00:08:21.690 98.00000% : 15426.166us 00:08:21.690 99.00000% : 18652.554us 00:08:21.690 99.50000% : 25811.102us 00:08:21.690 99.90000% : 30852.332us 00:08:21.690 99.99000% : 31053.982us 00:08:21.690 99.99900% : 31053.982us 00:08:21.690 99.99990% : 31053.982us 00:08:21.690 99.99999% : 31053.982us 00:08:21.690 00:08:21.690 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:21.690 ================================================================================= 00:08:21.690 1.00000% : 5999.065us 00:08:21.690 10.00000% : 6276.332us 00:08:21.690 25.00000% : 6604.012us 00:08:21.690 50.00000% : 8922.978us 00:08:21.690 75.00000% : 9880.812us 00:08:21.690 90.00000% : 11796.480us 00:08:21.690 95.00000% : 13409.674us 00:08:21.690 98.00000% : 15526.991us 00:08:21.690 99.00000% : 18148.431us 00:08:21.690 99.50000% : 25508.628us 00:08:21.690 99.90000% : 30247.385us 00:08:21.690 99.99000% : 30449.034us 00:08:21.690 99.99900% : 30449.034us 00:08:21.690 99.99990% : 30449.034us 00:08:21.690 99.99999% : 30449.034us 00:08:21.690 00:08:21.690 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:21.690 ================================================================================= 00:08:21.690 1.00000% : 5948.652us 00:08:21.690 10.00000% : 6276.332us 00:08:21.690 25.00000% : 6604.012us 00:08:21.690 50.00000% : 8872.566us 00:08:21.690 75.00000% : 9931.225us 00:08:21.690 90.00000% : 11947.717us 00:08:21.690 95.00000% : 13409.674us 00:08:21.690 98.00000% : 15829.465us 00:08:21.690 99.00000% : 17442.658us 00:08:21.690 99.50000% : 25407.803us 00:08:21.690 99.90000% : 30247.385us 00:08:21.690 99.99000% : 30449.034us 00:08:21.690 99.99900% : 30449.034us 00:08:21.690 99.99990% : 30449.034us 00:08:21.690 99.99999% : 30449.034us 00:08:21.690 00:08:21.690 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:21.690 ================================================================================= 00:08:21.690 1.00000% : 5923.446us 00:08:21.690 10.00000% : 6251.126us 00:08:21.690 25.00000% : 6604.012us 00:08:21.690 50.00000% : 8922.978us 00:08:21.690 75.00000% : 9880.812us 00:08:21.690 90.00000% : 11796.480us 00:08:21.690 95.00000% : 13510.498us 00:08:21.690 98.00000% : 15426.166us 00:08:21.690 
99.00000% : 18047.606us 00:08:21.690 99.50000% : 25004.505us 00:08:21.690 99.90000% : 29844.086us 00:08:21.690 99.99000% : 30045.735us 00:08:21.690 99.99900% : 30045.735us 00:08:21.690 99.99990% : 30045.735us 00:08:21.690 99.99999% : 30045.735us 00:08:21.690 00:08:21.690 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:21.690 ================================================================================= 00:08:21.690 1.00000% : 5948.652us 00:08:21.690 10.00000% : 6276.332us 00:08:21.690 25.00000% : 6604.012us 00:08:21.690 50.00000% : 8922.978us 00:08:21.690 75.00000% : 9880.812us 00:08:21.690 90.00000% : 11796.480us 00:08:21.690 95.00000% : 13611.323us 00:08:21.690 98.00000% : 14922.043us 00:08:21.690 99.00000% : 18450.905us 00:08:21.690 99.50000% : 24500.382us 00:08:21.690 99.90000% : 29440.788us 00:08:21.690 99.99000% : 29440.788us 00:08:21.690 99.99900% : 29642.437us 00:08:21.690 99.99990% : 29642.437us 00:08:21.690 99.99999% : 29642.437us 00:08:21.690 00:08:21.690 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:21.690 ================================================================================= 00:08:21.690 1.00000% : 5948.652us 00:08:21.690 10.00000% : 6251.126us 00:08:21.690 25.00000% : 6604.012us 00:08:21.690 50.00000% : 8922.978us 00:08:21.690 75.00000% : 9830.400us 00:08:21.690 90.00000% : 11796.480us 00:08:21.690 95.00000% : 13712.148us 00:08:21.690 98.00000% : 14720.394us 00:08:21.690 99.00000% : 18753.378us 00:08:21.690 99.50000% : 19862.449us 00:08:21.690 99.90000% : 24399.557us 00:08:21.690 99.99000% : 24601.206us 00:08:21.690 99.99900% : 24601.206us 00:08:21.690 99.99990% : 24601.206us 00:08:21.690 99.99999% : 24601.206us 00:08:21.690 00:08:21.690 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:21.690 ============================================================================== 00:08:21.690 Range in us Cumulative IO count 00:08:21.690 4537.108 - 4562.314: 0.0206% ( 3) 00:08:21.690 4562.314 - 4587.520: 0.0411% ( 3) 00:08:21.690 4587.520 - 4612.726: 0.0617% ( 3) 00:08:21.690 4612.726 - 4637.932: 0.0754% ( 2) 00:08:21.690 4637.932 - 4663.138: 0.0891% ( 2) 00:08:21.690 4663.138 - 4688.345: 0.0959% ( 1) 00:08:21.690 4688.345 - 4713.551: 0.1096% ( 2) 00:08:21.690 4713.551 - 4738.757: 0.1234% ( 2) 00:08:21.690 4738.757 - 4763.963: 0.1371% ( 2) 00:08:21.690 4763.963 - 4789.169: 0.1508% ( 2) 00:08:21.690 4789.169 - 4814.375: 0.1645% ( 2) 00:08:21.690 4814.375 - 4839.582: 0.1782% ( 2) 00:08:21.690 4839.582 - 4864.788: 0.1919% ( 2) 00:08:21.690 4864.788 - 4889.994: 0.2124% ( 3) 00:08:21.690 4889.994 - 4915.200: 0.2193% ( 1) 00:08:21.690 4915.200 - 4940.406: 0.2399% ( 3) 00:08:21.690 4940.406 - 4965.612: 0.2467% ( 1) 00:08:21.690 4965.612 - 4990.818: 0.2604% ( 2) 00:08:21.690 4990.818 - 5016.025: 0.2741% ( 2) 00:08:21.690 5016.025 - 5041.231: 0.2878% ( 2) 00:08:21.690 5041.231 - 5066.437: 0.3015% ( 2) 00:08:21.690 5066.437 - 5091.643: 0.3152% ( 2) 00:08:21.690 5091.643 - 5116.849: 0.3289% ( 2) 00:08:21.690 5116.849 - 5142.055: 0.3427% ( 2) 00:08:21.690 5142.055 - 5167.262: 0.3564% ( 2) 00:08:21.690 5167.262 - 5192.468: 0.3769% ( 3) 00:08:21.690 5192.468 - 5217.674: 0.3838% ( 1) 00:08:21.690 5217.674 - 5242.880: 0.4043% ( 3) 00:08:21.690 5242.880 - 5268.086: 0.4112% ( 1) 00:08:21.690 5268.086 - 5293.292: 0.4249% ( 2) 00:08:21.690 5293.292 - 5318.498: 0.4386% ( 2) 00:08:21.690 5721.797 - 5747.003: 0.4523% ( 2) 00:08:21.690 5747.003 - 5772.209: 0.4866% ( 5) 00:08:21.690 5772.209 - 5797.415: 0.5482% ( 9) 00:08:21.690 
5797.415 - 5822.622: 0.5620% ( 2) 00:08:21.690 5822.622 - 5847.828: 0.6168% ( 8) 00:08:21.690 5847.828 - 5873.034: 0.7333% ( 17) 00:08:21.690 5873.034 - 5898.240: 0.9320% ( 29) 00:08:21.690 5898.240 - 5923.446: 1.1719% ( 35) 00:08:21.690 5923.446 - 5948.652: 1.4323% ( 38) 00:08:21.690 5948.652 - 5973.858: 1.9874% ( 81) 00:08:21.690 5973.858 - 5999.065: 2.5151% ( 77) 00:08:21.690 5999.065 - 6024.271: 3.1730% ( 96) 00:08:21.690 6024.271 - 6049.477: 3.9679% ( 116) 00:08:21.690 6049.477 - 6074.683: 4.6738% ( 103) 00:08:21.690 6074.683 - 6099.889: 5.5784% ( 132) 00:08:21.690 6099.889 - 6125.095: 6.4145% ( 122) 00:08:21.690 6125.095 - 6150.302: 7.4356% ( 149) 00:08:21.690 6150.302 - 6175.508: 8.4841% ( 153) 00:08:21.690 6175.508 - 6200.714: 9.4298% ( 138) 00:08:21.690 6200.714 - 6225.920: 10.2865% ( 125) 00:08:21.690 6225.920 - 6251.126: 11.2733% ( 144) 00:08:21.690 6251.126 - 6276.332: 12.2738% ( 146) 00:08:21.690 6276.332 - 6301.538: 13.1853% ( 133) 00:08:21.690 6301.538 - 6326.745: 14.1790% ( 145) 00:08:21.690 6326.745 - 6351.951: 15.1796% ( 146) 00:08:21.690 6351.951 - 6377.157: 16.2007% ( 149) 00:08:21.691 6377.157 - 6402.363: 17.2012% ( 146) 00:08:21.691 6402.363 - 6427.569: 18.2360% ( 151) 00:08:21.691 6427.569 - 6452.775: 19.1338% ( 131) 00:08:21.691 6452.775 - 6503.188: 21.2651% ( 311) 00:08:21.691 6503.188 - 6553.600: 23.2388% ( 288) 00:08:21.691 6553.600 - 6604.012: 25.3289% ( 305) 00:08:21.691 6604.012 - 6654.425: 27.5014% ( 317) 00:08:21.691 6654.425 - 6704.837: 29.5641% ( 301) 00:08:21.691 6704.837 - 6755.249: 31.6817% ( 309) 00:08:21.691 6755.249 - 6805.662: 33.6965% ( 294) 00:08:21.691 6805.662 - 6856.074: 35.8210% ( 310) 00:08:21.691 6856.074 - 6906.486: 37.6987% ( 274) 00:08:21.691 6906.486 - 6956.898: 39.1790% ( 216) 00:08:21.691 6956.898 - 7007.311: 40.5085% ( 194) 00:08:21.691 7007.311 - 7057.723: 41.3103% ( 117) 00:08:21.691 7057.723 - 7108.135: 41.8860% ( 84) 00:08:21.691 7108.135 - 7158.548: 42.2834% ( 58) 00:08:21.691 7158.548 - 7208.960: 42.6192% ( 49) 00:08:21.691 7208.960 - 7259.372: 42.8865% ( 39) 00:08:21.691 7259.372 - 7309.785: 43.1401% ( 37) 00:08:21.691 7309.785 - 7360.197: 43.3457% ( 30) 00:08:21.691 7360.197 - 7410.609: 43.5787% ( 34) 00:08:21.691 7410.609 - 7461.022: 43.8185% ( 35) 00:08:21.691 7461.022 - 7511.434: 44.0173% ( 29) 00:08:21.691 7511.434 - 7561.846: 44.2297% ( 31) 00:08:21.691 7561.846 - 7612.258: 44.4559% ( 33) 00:08:21.691 7612.258 - 7662.671: 44.6615% ( 30) 00:08:21.691 7662.671 - 7713.083: 44.7780% ( 17) 00:08:21.691 7713.083 - 7763.495: 44.9150% ( 20) 00:08:21.691 7763.495 - 7813.908: 45.0178% ( 15) 00:08:21.691 7813.908 - 7864.320: 45.1069% ( 13) 00:08:21.691 7864.320 - 7914.732: 45.1617% ( 8) 00:08:21.691 7914.732 - 7965.145: 45.1891% ( 4) 00:08:21.691 7965.145 - 8015.557: 45.2234% ( 5) 00:08:21.691 8015.557 - 8065.969: 45.2714% ( 7) 00:08:21.691 8065.969 - 8116.382: 45.3194% ( 7) 00:08:21.691 8116.382 - 8166.794: 45.4084% ( 13) 00:08:21.691 8166.794 - 8217.206: 45.5318% ( 18) 00:08:21.691 8217.206 - 8267.618: 45.7442% ( 31) 00:08:21.691 8267.618 - 8318.031: 45.9635% ( 32) 00:08:21.691 8318.031 - 8368.443: 46.1349% ( 25) 00:08:21.691 8368.443 - 8418.855: 46.3199% ( 27) 00:08:21.691 8418.855 - 8469.268: 46.5803% ( 38) 00:08:21.691 8469.268 - 8519.680: 46.7791% ( 29) 00:08:21.691 8519.680 - 8570.092: 47.1286% ( 51) 00:08:21.691 8570.092 - 8620.505: 47.4644% ( 49) 00:08:21.691 8620.505 - 8670.917: 47.8618% ( 58) 00:08:21.691 8670.917 - 8721.329: 48.3827% ( 76) 00:08:21.691 8721.329 - 8771.742: 48.8692% ( 71) 00:08:21.691 8771.742 - 8822.154: 
49.4518% ( 85) 00:08:21.691 8822.154 - 8872.566: 50.3975% ( 138) 00:08:21.691 8872.566 - 8922.978: 51.3980% ( 146) 00:08:21.691 8922.978 - 8973.391: 52.3780% ( 143) 00:08:21.691 8973.391 - 9023.803: 53.4128% ( 151) 00:08:21.691 9023.803 - 9074.215: 54.7286% ( 192) 00:08:21.691 9074.215 - 9124.628: 55.8868% ( 169) 00:08:21.691 9124.628 - 9175.040: 57.0792% ( 174) 00:08:21.691 9175.040 - 9225.452: 58.2991% ( 178) 00:08:21.691 9225.452 - 9275.865: 59.6423% ( 196) 00:08:21.691 9275.865 - 9326.277: 60.9786% ( 195) 00:08:21.691 9326.277 - 9376.689: 62.2396% ( 184) 00:08:21.691 9376.689 - 9427.102: 63.5074% ( 185) 00:08:21.691 9427.102 - 9477.514: 64.8506% ( 196) 00:08:21.691 9477.514 - 9527.926: 66.1870% ( 195) 00:08:21.691 9527.926 - 9578.338: 67.5918% ( 205) 00:08:21.691 9578.338 - 9628.751: 68.9282% ( 195) 00:08:21.691 9628.751 - 9679.163: 70.3468% ( 207) 00:08:21.691 9679.163 - 9729.575: 71.5049% ( 169) 00:08:21.691 9729.575 - 9779.988: 72.6562% ( 168) 00:08:21.691 9779.988 - 9830.400: 73.9446% ( 188) 00:08:21.691 9830.400 - 9880.812: 75.0274% ( 158) 00:08:21.691 9880.812 - 9931.225: 75.9457% ( 134) 00:08:21.691 9931.225 - 9981.637: 76.7887% ( 123) 00:08:21.691 9981.637 - 10032.049: 77.7001% ( 133) 00:08:21.691 10032.049 - 10082.462: 78.5019% ( 117) 00:08:21.691 10082.462 - 10132.874: 79.1187% ( 90) 00:08:21.691 10132.874 - 10183.286: 79.6806% ( 82) 00:08:21.691 10183.286 - 10233.698: 80.4071% ( 106) 00:08:21.691 10233.698 - 10284.111: 81.0238% ( 90) 00:08:21.691 10284.111 - 10334.523: 81.5789% ( 81) 00:08:21.691 10334.523 - 10384.935: 82.0929% ( 75) 00:08:21.691 10384.935 - 10435.348: 82.5110% ( 61) 00:08:21.691 10435.348 - 10485.760: 83.0798% ( 83) 00:08:21.691 10485.760 - 10536.172: 83.4841% ( 59) 00:08:21.691 10536.172 - 10586.585: 83.9433% ( 67) 00:08:21.691 10586.585 - 10636.997: 84.3544% ( 60) 00:08:21.691 10636.997 - 10687.409: 84.6971% ( 50) 00:08:21.691 10687.409 - 10737.822: 85.0535% ( 52) 00:08:21.691 10737.822 - 10788.234: 85.3893% ( 49) 00:08:21.691 10788.234 - 10838.646: 85.7045% ( 46) 00:08:21.691 10838.646 - 10889.058: 86.1637% ( 67) 00:08:21.691 10889.058 - 10939.471: 86.5269% ( 53) 00:08:21.691 10939.471 - 10989.883: 86.8558% ( 48) 00:08:21.691 10989.883 - 11040.295: 87.1436% ( 42) 00:08:21.691 11040.295 - 11090.708: 87.4863% ( 50) 00:08:21.691 11090.708 - 11141.120: 87.7467% ( 38) 00:08:21.691 11141.120 - 11191.532: 88.0414% ( 43) 00:08:21.691 11191.532 - 11241.945: 88.2127% ( 25) 00:08:21.691 11241.945 - 11292.357: 88.3635% ( 22) 00:08:21.691 11292.357 - 11342.769: 88.5691% ( 30) 00:08:21.691 11342.769 - 11393.182: 88.7267% ( 23) 00:08:21.691 11393.182 - 11443.594: 88.8706% ( 21) 00:08:21.691 11443.594 - 11494.006: 89.0556% ( 27) 00:08:21.691 11494.006 - 11544.418: 89.2749% ( 32) 00:08:21.691 11544.418 - 11594.831: 89.4463% ( 25) 00:08:21.691 11594.831 - 11645.243: 89.6519% ( 30) 00:08:21.691 11645.243 - 11695.655: 89.8369% ( 27) 00:08:21.691 11695.655 - 11746.068: 89.9945% ( 23) 00:08:21.691 11746.068 - 11796.480: 90.1933% ( 29) 00:08:21.691 11796.480 - 11846.892: 90.3783% ( 27) 00:08:21.691 11846.892 - 11897.305: 90.5154% ( 20) 00:08:21.691 11897.305 - 11947.717: 90.6798% ( 24) 00:08:21.691 11947.717 - 11998.129: 90.8512% ( 25) 00:08:21.691 11998.129 - 12048.542: 91.0019% ( 22) 00:08:21.691 12048.542 - 12098.954: 91.1527% ( 22) 00:08:21.691 12098.954 - 12149.366: 91.2760% ( 18) 00:08:21.691 12149.366 - 12199.778: 91.4611% ( 27) 00:08:21.691 12199.778 - 12250.191: 91.6118% ( 22) 00:08:21.691 12250.191 - 12300.603: 91.7900% ( 26) 00:08:21.691 12300.603 - 12351.015: 
91.9339% ( 21) 00:08:21.691 12351.015 - 12401.428: 92.1121% ( 26) 00:08:21.691 12401.428 - 12451.840: 92.2560% ( 21) 00:08:21.691 12451.840 - 12502.252: 92.4479% ( 28) 00:08:21.691 12502.252 - 12552.665: 92.5850% ( 20) 00:08:21.691 12552.665 - 12603.077: 92.7083% ( 18) 00:08:21.691 12603.077 - 12653.489: 92.8385% ( 19) 00:08:21.691 12653.489 - 12703.902: 92.9825% ( 21) 00:08:21.691 12703.902 - 12754.314: 93.0921% ( 16) 00:08:21.691 12754.314 - 12804.726: 93.2086% ( 17) 00:08:21.691 12804.726 - 12855.138: 93.3183% ( 16) 00:08:21.691 12855.138 - 12905.551: 93.4142% ( 14) 00:08:21.691 12905.551 - 13006.375: 93.6472% ( 34) 00:08:21.691 13006.375 - 13107.200: 93.8391% ( 28) 00:08:21.691 13107.200 - 13208.025: 93.9899% ( 22) 00:08:21.691 13208.025 - 13308.849: 94.1406% ( 22) 00:08:21.691 13308.849 - 13409.674: 94.3668% ( 33) 00:08:21.691 13409.674 - 13510.498: 94.6272% ( 38) 00:08:21.691 13510.498 - 13611.323: 94.8533% ( 33) 00:08:21.691 13611.323 - 13712.148: 95.0521% ( 29) 00:08:21.691 13712.148 - 13812.972: 95.3125% ( 38) 00:08:21.691 13812.972 - 13913.797: 95.5798% ( 39) 00:08:21.691 13913.797 - 14014.622: 95.8402% ( 38) 00:08:21.691 14014.622 - 14115.446: 96.0663% ( 33) 00:08:21.691 14115.446 - 14216.271: 96.3473% ( 41) 00:08:21.691 14216.271 - 14317.095: 96.6420% ( 43) 00:08:21.691 14317.095 - 14417.920: 96.8407% ( 29) 00:08:21.691 14417.920 - 14518.745: 97.1149% ( 40) 00:08:21.691 14518.745 - 14619.569: 97.2793% ( 24) 00:08:21.691 14619.569 - 14720.394: 97.3890% ( 16) 00:08:21.691 14720.394 - 14821.218: 97.5260% ( 20) 00:08:21.691 14821.218 - 14922.043: 97.6357% ( 16) 00:08:21.691 14922.043 - 15022.868: 97.7179% ( 12) 00:08:21.691 15022.868 - 15123.692: 97.8276% ( 16) 00:08:21.691 15123.692 - 15224.517: 97.9372% ( 16) 00:08:21.691 15224.517 - 15325.342: 97.9989% ( 9) 00:08:21.691 15325.342 - 15426.166: 98.1154% ( 17) 00:08:21.691 15426.166 - 15526.991: 98.2251% ( 16) 00:08:21.691 15526.991 - 15627.815: 98.3004% ( 11) 00:08:21.691 15627.815 - 15728.640: 98.3347% ( 5) 00:08:21.691 15728.640 - 15829.465: 98.3621% ( 4) 00:08:21.691 15829.465 - 15930.289: 98.3895% ( 4) 00:08:21.691 15930.289 - 16031.114: 98.4306% ( 6) 00:08:21.691 16031.114 - 16131.938: 98.4581% ( 4) 00:08:21.691 16131.938 - 16232.763: 98.4923% ( 5) 00:08:21.691 16232.763 - 16333.588: 98.5266% ( 5) 00:08:21.691 16333.588 - 16434.412: 98.5471% ( 3) 00:08:21.691 16434.412 - 16535.237: 98.5814% ( 5) 00:08:21.691 16535.237 - 16636.062: 98.6020% ( 3) 00:08:21.691 16636.062 - 16736.886: 98.6431% ( 6) 00:08:21.691 16736.886 - 16837.711: 98.6842% ( 6) 00:08:21.691 17644.308 - 17745.132: 98.7116% ( 4) 00:08:21.691 17745.132 - 17845.957: 98.7527% ( 6) 00:08:21.691 17845.957 - 17946.782: 98.7870% ( 5) 00:08:21.691 17946.782 - 18047.606: 98.8144% ( 4) 00:08:21.691 18047.606 - 18148.431: 98.8487% ( 5) 00:08:21.691 18148.431 - 18249.255: 98.8692% ( 3) 00:08:21.691 18249.255 - 18350.080: 98.9104% ( 6) 00:08:21.691 18350.080 - 18450.905: 98.9515% ( 6) 00:08:21.691 18450.905 - 18551.729: 98.9857% ( 5) 00:08:21.691 18551.729 - 18652.554: 99.0269% ( 6) 00:08:21.691 18652.554 - 18753.378: 99.0543% ( 4) 00:08:21.691 18753.378 - 18854.203: 99.0885% ( 5) 00:08:21.691 18854.203 - 18955.028: 99.1228% ( 5) 00:08:21.691 24802.855 - 24903.680: 99.1502% ( 4) 00:08:21.691 24903.680 - 25004.505: 99.1982% ( 7) 00:08:21.691 25004.505 - 25105.329: 99.2462% ( 7) 00:08:21.691 25105.329 - 25206.154: 99.2736% ( 4) 00:08:21.691 25206.154 - 25306.978: 99.3078% ( 5) 00:08:21.691 25306.978 - 25407.803: 99.3490% ( 6) 00:08:21.691 25407.803 - 25508.628: 99.3901% ( 6) 
00:08:21.691 25508.628 - 25609.452: 99.4449% ( 8)
00:08:21.692 25609.452 - 25710.277: 99.4723% ( 4)
00:08:21.692 25710.277 - 25811.102: 99.5271% ( 8)
00:08:21.692 25811.102 - 26012.751: 99.5614% ( 5)
00:08:21.692 29642.437 - 29844.086: 99.6162% ( 8)
00:08:21.692 29844.086 - 30045.735: 99.6916% ( 11)
00:08:21.692 30045.735 - 30247.385: 99.7396% ( 7)
00:08:21.692 30247.385 - 30449.034: 99.7944% ( 8)
00:08:21.692 30449.034 - 30650.683: 99.8835% ( 13)
00:08:21.692 30650.683 - 30852.332: 99.9589% ( 11)
00:08:21.692 30852.332 - 31053.982: 100.0000% ( 6)
00:08:21.692
00:08:21.692 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:21.692 ==============================================================================
00:08:21.692        Range in us     Cumulative    IO count
00:08:21.692   4436.283 -  4461.489:   0.0137% ( 2)
             [... intermediate buckets omitted; see the full console log for the complete table ...]
00:08:21.693  30247.385 - 30449.034: 100.0000% ( 3)
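The bucket lines above follow a fixed shape (low edge, high edge, cumulative percentage, per-bucket I/O count), so they are easy to post-process when triaging a run like this one. Below is a minimal parsing sketch; the regex, the helper name parse_buckets, and reading from a saved console log are assumptions for illustration, not part of SPDK or the autotest harness.

    import re

    # Matches bucket lines such as "4436.283 - 4461.489: 0.0137% ( 2)",
    # with or without the "00:08:21.692" log-timestamp prefix in front.
    BUCKET_RE = re.compile(
        r"(?P<lo>\d+\.\d+)\s*-\s*(?P<hi>\d+\.\d+):\s*"
        r"(?P<cum>\d+\.\d+)%\s*\(\s*(?P<count>\d+)\s*\)"
    )

    def parse_buckets(lines):
        """Yield (lo_us, hi_us, cumulative_pct, io_count) for each bucket line."""
        for line in lines:
            m = BUCKET_RE.search(line)
            if m:
                yield (float(m["lo"]), float(m["hi"]),
                       float(m["cum"]), int(m["count"]))

Feeding the lines of one histogram section through parse_buckets turns the table back into numbers that can be plotted or compared across runs.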
00:08:21.693
00:08:21.693 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:21.693 ==============================================================================
00:08:21.693        Range in us     Cumulative    IO count
00:08:21.693   3906.954 -  3932.160:   0.0069% ( 1)
             [... intermediate buckets omitted; see the full console log for the complete table ...]
00:08:21.694  30247.385 - 30449.034: 100.0000% ( 9)
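Because the third column is cumulative, any percentile can be estimated from one of these tables by finding the first bucket whose cumulative percentage reaches the target and interpolating within it. A small sketch, assuming the (lo, hi, cumulative, count) tuples produced by the parser above; linear interpolation inside a bucket is an approximation, since the true distribution within a bucket is unknown.

    def approx_percentile(buckets, target_pct):
        """Estimate the latency in microseconds at target_pct (e.g. 99.0).

        buckets: ascending (lo_us, hi_us, cumulative_pct, io_count) tuples.
        """
        prev_cum = 0.0
        for lo, hi, cum, _count in buckets:
            if cum >= target_pct:
                if cum == prev_cum:
                    return hi
                # Interpolate linearly inside the bucket that crosses the target.
                return lo + (target_pct - prev_cum) / (cum - prev_cum) * (hi - lo)
            prev_cum = cum
        return None  # target never reached in the table

This is the same kind of lookup the tool itself performs when it prints the "99.50000% :" style milestones in its summary output.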
00:08:21.694
00:08:21.694 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:21.694 ==============================================================================
00:08:21.694        Range in us     Cumulative    IO count
00:08:21.694   3680.098 -  3705.305:   0.0206% ( 3)
             [... intermediate buckets omitted; see the full console log for the complete table ...]
00:08:21.696  29844.086 - 30045.735: 100.0000% ( 2)
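One detail worth noting when reading these tables: the bucket width is not constant. In the excerpts above the step is about 25.2 us near 4.4 ms (4436.283 to 4461.489), about 100.8 us near 25.5 ms (25508.628 to 25609.452), and about 201.6 us near 29.6 ms (29642.437 to 29844.086). In other words the width roughly doubles each time the latency range doubles, the shape of a log-linear histogram. The sketch below generates edges for such a scheme; the function and its parameters are illustrative only and are not SPDK's actual histogram implementation.

    def log_linear_edges(start_us, stop_us, base_width_us):
        """Bucket edges whose width doubles whenever the range doubles.

        start_us, stop_us and base_width_us are assumed parameters chosen
        by the caller, e.g. log_linear_edges(3200.0, 32000.0, 25.2).
        """
        edges = [start_us]
        width = base_width_us
        next_doubling = start_us * 2
        cur = start_us
        while cur < stop_us:
            if cur >= next_doubling:
                width *= 2
                next_doubling *= 2
            cur += width
            edges.append(round(cur, 3))
        return edges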
00:08:21.696
00:08:21.696 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:21.696 ==============================================================================
00:08:21.696        Range in us     Cumulative    IO count
00:08:21.696   3528.862 -  3554.068:   0.0206% ( 3)
             [... intermediate buckets omitted; see the full console log for the complete table ...]
00:08:21.697  29440.788 - 29642.437: 100.0000% ( 1)
00:08:21.697
00:08:21.697 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:21.697 ==============================================================================
00:08:21.697        Range in us     Cumulative    IO count
00:08:21.697   3302.006 -  3327.212:   0.0136% ( 2)
             [... intermediate buckets omitted; see the full console log for the complete table ...]
00:08:21.699  24500.382 - 24601.206: 100.0000% ( 6)
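All six namespaces have now printed their histograms, so a natural next step when comparing controllers is to split the console log into per-device sections and compute the same statistic for each. A sketch building on BUCKET_RE and approx_percentile from the earlier snippets; the caption-based splitting heuristic and the log filename are assumptions.

    def split_histograms(log_lines):
        """Map 'PCIE (0000:00:11.0) NSID 1 from core 0' -> list of bucket tuples."""
        sections, current = {}, None
        for line in log_lines:
            if "Latency histogram for" in line:
                current = line.split("Latency histogram for", 1)[1].strip().rstrip(":")
                sections[current] = []
            elif current is not None:
                m = BUCKET_RE.search(line)
                if m:
                    sections[current].append((float(m["lo"]), float(m["hi"]),
                                              float(m["cum"]), int(m["count"])))
        return sections

    # Example: approximate p99.9 per namespace from a saved console log.
    # for name, buckets in split_histograms(open("console.log")).items():
    #     print(f"{name}: p99.9 ~= {approx_percentile(buckets, 99.9):.1f} us")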
00:08:21.699
00:08:21.699 05:10:14 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:22.634 Initializing NVMe Controllers
00:08:22.634 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:22.634 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:22.634 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:22.634 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:22.634 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:22.634 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:22.634 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:22.634 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:22.634 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:22.634 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:22.634 Initialization complete. Launching workers.
00:08:22.634 ========================================================
00:08:22.634                                                                   Latency(us)
00:08:22.634 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:22.634 PCIE (0000:00:10.0) NSID 1 from core 0:   17269.69     202.38    7415.12    5213.47   22916.48
00:08:22.634 PCIE (0000:00:11.0) NSID 1 from core 0:   17269.69     202.38    7409.11    5130.98   21443.39
00:08:22.634 PCIE (0000:00:13.0) NSID 1 from core 0:   17269.69     202.38    7403.18    4613.73   21634.46
00:08:22.634 PCIE (0000:00:12.0) NSID 1 from core 0:   17269.69     202.38    7397.11    4278.76   21467.21
00:08:22.634 PCIE (0000:00:12.0) NSID 2 from core 0:   17269.69     202.38    7391.20    4016.94   21328.72
00:08:22.634 PCIE (0000:00:12.0) NSID 3 from core 0:   17269.69     202.38    7385.25    3804.62   21127.57
00:08:22.634 ========================================================
00:08:22.634 Total                                  :  103618.14    1214.28    7400.16    3804.62   22916.48
00:08:22.634
00:08:22.634 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:22.634 =================================================================================
00:08:22.634   1.00000% :  5973.858us
00:08:22.634  10.00000% :  6326.745us
00:08:22.634  25.00000% :  6654.425us
00:08:22.634  50.00000% :  6956.898us
00:08:22.634  75.00000% :  7511.434us
00:08:22.634  90.00000% :  9175.040us
00:08:22.634  95.00000% : 10586.585us
00:08:22.634  98.00000% : 12653.489us
00:08:22.634  99.00000% : 14317.095us
00:08:22.635  99.50000% : 15224.517us
00:08:22.635  99.90000% : 22483.889us
00:08:22.635  99.99000% : 22887.188us
00:08:22.635  99.99900% : 22988.012us
00:08:22.635  99.99990% : 22988.012us
00:08:22.635  99.99999% : 22988.012us
00:08:22.635
00:08:22.635 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:22.635 =================================================================================
00:08:22.635   1.00000% :  6099.889us
00:08:22.635  10.00000% :  6377.157us
00:08:22.635  25.00000% :  6654.425us
00:08:22.635  50.00000% :  7007.311us
00:08:22.635  75.00000% :  7410.609us
00:08:22.635  90.00000% :  9023.803us
00:08:22.635  95.00000% : 10586.585us
00:08:22.635  98.00000% : 12804.726us
00:08:22.635  99.00000% : 13712.148us
00:08:22.635  99.50000% : 15526.991us
00:08:22.635  99.90000% : 21374.818us
00:08:22.635  99.99000% : 21475.643us
00:08:22.635  99.99900% : 21475.643us
00:08:22.635  99.99990% : 21475.643us
00:08:22.635  99.99999% : 21475.643us
00:08:22.635
00:08:22.635 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:22.635 =================================================================================
00:08:22.635   1.00000% :  6074.683us
00:08:22.635  10.00000% :  6402.363us
00:08:22.635  25.00000% :  6654.425us
00:08:22.635  50.00000% :  7007.311us
00:08:22.635  75.00000% :  7410.609us
00:08:22.635  90.00000% :  8771.742us
00:08:22.635  95.00000% : 10989.883us
00:08:22.635  98.00000% : 12603.077us
00:08:22.635  99.00000% : 13409.674us
00:08:22.635  99.50000% : 16031.114us
00:08:22.635  99.90000% : 21374.818us
00:08:22.635  99.99000% : 21677.292us
00:08:22.635  99.99900% : 21677.292us
00:08:22.635  99.99990% : 21677.292us
00:08:22.635  99.99999% : 21677.292us
00:08:22.635
00:08:22.635 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:22.635 =================================================================================
00:08:22.635   1.00000% :  6099.889us
00:08:22.635  10.00000% :  6402.363us
00:08:22.635  25.00000% :  6654.425us
00:08:22.635  50.00000% :  7007.311us
00:08:22.635  75.00000% :  7410.609us
00:08:22.635  90.00000% :  8771.742us
00:08:22.635  95.00000% : 10838.646us
00:08:22.635  98.00000% : 12098.954us
00:08:22.635  99.00000% : 13913.797us
00:08:22.635  99.50000% : 15930.289us
00:08:22.635  99.90000% : 21173.169us
00:08:22.635  99.99000% : 21475.643us
00:08:22.635  99.99900% : 21475.643us
00:08:22.635  99.99990% : 21475.643us
00:08:22.635  99.99999% : 21475.643us
00:08:22.635
00:08:22.635 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:22.635 =================================================================================
00:08:22.635   1.00000% :  6099.889us
00:08:22.635  10.00000% :  6377.157us
00:08:22.635  25.00000% :  6654.425us
00:08:22.635  50.00000% :  7007.311us
00:08:22.635  75.00000% :  7360.197us
00:08:22.635  90.00000% :  8822.154us
00:08:22.635  95.00000% : 10788.234us
00:08:22.635  98.00000% : 12401.428us
00:08:22.635  99.00000% : 14317.095us
00:08:22.635  99.50000% : 15627.815us
00:08:22.635  99.90000% : 20971.520us
00:08:22.635  99.99000% : 21273.994us
00:08:22.635  99.99900% : 21374.818us
00:08:22.635  99.99990% : 21374.818us
00:08:22.635  99.99999% : 21374.818us
00:08:22.635
00:08:22.635 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:22.635 =================================================================================
00:08:22.635   1.00000% :  5999.065us
00:08:22.635  10.00000% :  6377.157us
00:08:22.635  25.00000% :  6654.425us
00:08:22.635  50.00000% :  7007.311us
00:08:22.635  75.00000% :  7410.609us
00:08:22.635  90.00000% :  9023.803us
00:08:22.635  95.00000% : 10636.997us
00:08:22.635  98.00000% : 12703.902us
00:08:22.635  99.00000% : 14317.095us
00:08:22.635  99.50000% : 15526.991us
00:08:22.635  99.90000% : 20769.871us
00:08:22.635  99.99000% : 21173.169us
00:08:22.635  99.99900% : 21173.169us
00:08:22.635  99.99990% : 21173.169us
00:08:22.635  99.99999% : 21173.169us
00:08:22.635
00:08:22.635 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:22.635 ==============================================================================
00:08:22.635        Range in us     Cumulative IO count
00:08:22.635   5192.468 -  5217.674:   0.0058% (     1)
00:08:22.635   [intermediate latency-histogram buckets omitted]
00:08:22.636  22887.188 - 22988.012: 100.0000% (     1)
00:08:22.636
00:08:22.636 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:22.636 ==============================================================================
00:08:22.636        Range in us     Cumulative IO count
00:08:22.636   5116.849 -  5142.055:   0.0058% (     1)
00:08:22.636   [intermediate latency-histogram buckets omitted]
00:08:22.637  21374.818 - 21475.643: 100.0000% (     9)
00:08:22.637
00:08:22.637 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:22.637 ==============================================================================
00:08:22.637        Range in us     Cumulative IO count
00:08:22.637   4612.726 -  4637.932:   0.0058% (     1)
00:08:22.637   [intermediate latency-histogram buckets omitted]
00:08:22.638  21576.468 - 21677.292: 100.0000% (     3)
00:08:22.638
00:08:22.638 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:22.638 ==============================================================================
00:08:22.638        Range in us     Cumulative IO count
00:08:22.638   4259.840 -  4285.046:   0.0058% (     1)
00:08:22.638   [intermediate latency-histogram buckets omitted]
00:08:22.640  21374.818 - 21475.643: 100.0000% (     5)
00:08:22.640
00:08:22.640 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:22.640 ==============================================================================
00:08:22.640        Range in us     Cumulative IO count
00:08:22.640   4007.778 -  4032.985:   0.0116% (     2)
00:08:22.640   [intermediate latency-histogram buckets omitted]
00:08:22.900  21273.994 - 21374.818: 100.0000% (     1)
00:08:22.900
00:08:22.900 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:22.900 ==============================================================================
00:08:22.900        Range in us     Cumulative IO count
00:08:22.900   3780.923 -  3806.129:   0.0058% (     1)
00:08:22.900   [intermediate latency-histogram buckets omitted; cumulative count reaches 99.4502% by 15426.166us within this excerpt]
00:08:22.901  15325.342 - 15426.166:  99.4502% (     5)
00:08:22.901  15426.166 -
15526.991: 99.5370% ( 15) 00:08:22.901 15526.991 - 15627.815: 99.5602% ( 4) 00:08:22.901 15627.815 - 15728.640: 99.5833% ( 4) 00:08:22.901 15728.640 - 15829.465: 99.6065% ( 4) 00:08:22.901 15829.465 - 15930.289: 99.6296% ( 4) 00:08:22.901 20064.098 - 20164.923: 99.6528% ( 4) 00:08:22.901 20164.923 - 20265.748: 99.6817% ( 5) 00:08:22.901 20265.748 - 20366.572: 99.7106% ( 5) 00:08:22.901 20366.572 - 20467.397: 99.8264% ( 20) 00:08:22.901 20467.397 - 20568.222: 99.8495% ( 4) 00:08:22.901 20568.222 - 20669.046: 99.8785% ( 5) 00:08:22.901 20669.046 - 20769.871: 99.9016% ( 4) 00:08:22.901 20769.871 - 20870.695: 99.9306% ( 5) 00:08:22.901 20870.695 - 20971.520: 99.9537% ( 4) 00:08:22.901 20971.520 - 21072.345: 99.9826% ( 5) 00:08:22.901 21072.345 - 21173.169: 100.0000% ( 3) 00:08:22.901 00:08:22.901 05:10:15 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:22.901 00:08:22.901 real 0m2.428s 00:08:22.901 user 0m2.141s 00:08:22.901 sys 0m0.191s 00:08:22.901 05:10:15 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.901 05:10:15 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:22.901 ************************************ 00:08:22.901 END TEST nvme_perf 00:08:22.901 ************************************ 00:08:22.901 05:10:15 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:22.901 05:10:15 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:22.901 05:10:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.901 05:10:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.901 ************************************ 00:08:22.901 START TEST nvme_hello_world 00:08:22.901 ************************************ 00:08:22.901 05:10:15 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:22.901 Initializing NVMe Controllers 00:08:22.901 Attached to 0000:00:10.0 00:08:22.901 Namespace ID: 1 size: 6GB 00:08:22.901 Attached to 0000:00:11.0 00:08:22.901 Namespace ID: 1 size: 5GB 00:08:22.901 Attached to 0000:00:13.0 00:08:22.901 Namespace ID: 1 size: 1GB 00:08:22.901 Attached to 0000:00:12.0 00:08:22.901 Namespace ID: 1 size: 4GB 00:08:22.901 Namespace ID: 2 size: 4GB 00:08:22.901 Namespace ID: 3 size: 4GB 00:08:22.901 Initialization complete. 00:08:22.901 INFO: using host memory buffer for IO 00:08:22.901 Hello world! 00:08:22.901 INFO: using host memory buffer for IO 00:08:22.901 Hello world! 00:08:22.901 INFO: using host memory buffer for IO 00:08:22.901 Hello world! 00:08:22.901 INFO: using host memory buffer for IO 00:08:22.901 Hello world! 00:08:22.901 INFO: using host memory buffer for IO 00:08:22.901 Hello world! 00:08:22.901 INFO: using host memory buffer for IO 00:08:22.901 Hello world! 
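The hello_world pass above is SPDK's example application attaching to the four emulated controllers, listing each active namespace, and pushing a "Hello world!" buffer through each one via a host memory buffer. Below is a minimal sketch of the probe/attach pattern that output reflects, assuming local PCIe enumeration; the spdk_* calls are the driver's public API as declared in spdk/nvme.h and spdk/env.h, while the program structure, app name, and print formats are illustrative rather than a copy of the example:

    /* Sketch: enumerate local NVMe controllers and report their namespaces,
     * in the spirit of the hello_world output above. Error handling trimmed. */
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        return true; /* attach to every controller the probe finds */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        uint32_t nsid;

        printf("Attached to %s\n", trid->traddr);
        /* walk the active namespace list and print each namespace's size */
        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            printf("Namespace ID: %u size: %lluGB\n", nsid,
                   (unsigned long long)(spdk_nvme_ns_get_size(ns) / 1000000000ULL));
        }
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "hello_world_sketch"; /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* NULL trid: probe all local PCIe NVMe controllers */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
    }

After attach, the real example additionally allocates an I/O qpair per namespace, writes the greeting, reads it back, and prints the "Hello world!" lines seen above.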
00:08:22.901 00:08:22.901 real 0m0.179s 00:08:22.901 user 0m0.063s 00:08:22.901 sys 0m0.079s 00:08:22.901 05:10:16 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.901 05:10:16 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:22.901 ************************************ 00:08:22.901 END TEST nvme_hello_world 00:08:22.901 ************************************ 00:08:23.159 05:10:16 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:23.159 05:10:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.159 05:10:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.159 05:10:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.159 ************************************ 00:08:23.159 START TEST nvme_sgl 00:08:23.159 ************************************ 00:08:23.159 05:10:16 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:23.159 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:23.159 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:23.159 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:23.159 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:23.159 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:23.159 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:23.159 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:23.159 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:23.159 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:23.159 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:23.159 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:23.159 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:23.159 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:23.159 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:23.159 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:23.159 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:23.159 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:23.159 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:23.159 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:23.159 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:23.159 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:23.159 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:23.159 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:23.159 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:23.159 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:23.159 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:23.159 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:23.159 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:23.159 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:23.159 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:23.159 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:23.159 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:23.159 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:23.159 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:08:23.159 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:23.159 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:23.159 NVMe Readv/Writev Request test 00:08:23.159 Attached to 0000:00:10.0 00:08:23.159 Attached to 0000:00:11.0 00:08:23.160 Attached to 0000:00:13.0 00:08:23.160 Attached to 0000:00:12.0 00:08:23.160 0000:00:10.0: build_io_request_2 test passed 00:08:23.160 0000:00:10.0: build_io_request_4 test passed 00:08:23.160 0000:00:10.0: build_io_request_5 test passed 00:08:23.160 0000:00:10.0: build_io_request_6 test passed 00:08:23.160 0000:00:10.0: build_io_request_7 test passed 00:08:23.160 0000:00:10.0: build_io_request_10 test passed 00:08:23.160 0000:00:11.0: build_io_request_2 test passed 00:08:23.160 0000:00:11.0: build_io_request_4 test passed 00:08:23.160 0000:00:11.0: build_io_request_5 test passed 00:08:23.160 0000:00:11.0: build_io_request_6 test passed 00:08:23.160 0000:00:11.0: build_io_request_7 test passed 00:08:23.160 0000:00:11.0: build_io_request_10 test passed 00:08:23.160 Cleaning up... 00:08:23.160 00:08:23.160 real 0m0.241s 00:08:23.160 user 0m0.115s 00:08:23.160 sys 0m0.078s 00:08:23.160 05:10:16 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.160 05:10:16 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:23.160 ************************************ 00:08:23.160 END TEST nvme_sgl 00:08:23.160 ************************************ 00:08:23.418 05:10:16 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:23.418 05:10:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.418 05:10:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.418 05:10:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.418 ************************************ 00:08:23.418 START TEST nvme_e2edp 00:08:23.418 ************************************ 00:08:23.418 05:10:16 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:23.418 NVMe Write/Read with End-to-End data protection test 00:08:23.418 Attached to 0000:00:10.0 00:08:23.418 Attached to 0000:00:11.0 00:08:23.418 Attached to 0000:00:13.0 00:08:23.418 Attached to 0000:00:12.0 00:08:23.418 Cleaning up... 
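The build_io_request_* lines from the sgl test above come from driving scattered payloads through the driver: requests whose scatter list does not line up with the LBA count are rejected with "Invalid IO length parameter", and the well-formed ones complete and are reported as passed. A hedged sketch of how such an SGL read is issued follows, assuming a fixed 4-element iovec; sgl_ctx and submit_sgl_read are hypothetical helpers, while spdk_nvme_ns_cmd_readv and its reset/next-SGE callback signatures are the public API:

    /* Sketch: issue one read whose payload is described by a caller-provided
     * scatter list. The driver pulls SGEs on demand via the two callbacks. */
    #include <sys/uio.h>
    #include "spdk/nvme.h"

    struct sgl_ctx {
        struct iovec iov[4]; /* hypothetical fixed-size scatter list */
        int          iovpos;
    };

    static void
    reset_sgl(void *cb_arg, uint32_t sgl_offset)
    {
        struct sgl_ctx *ctx = cb_arg;

        ctx->iovpos = 0; /* restart iteration; offset handling elided */
    }

    static int
    next_sge(void *cb_arg, void **address, uint32_t *length)
    {
        struct sgl_ctx *ctx = cb_arg;

        *address = ctx->iov[ctx->iovpos].iov_base;
        *length  = (uint32_t)ctx->iov[ctx->iovpos].iov_len;
        ctx->iovpos++;
        return 0;
    }

    /* The iovec bytes must add up to lba_count * sector size, or the driver
     * fails the request up front - the "Invalid IO length" cases above. */
    static int
    submit_sgl_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                    struct sgl_ctx *ctx, uint64_t lba, uint32_t lba_count,
                    spdk_nvme_cmd_cb cb_fn)
    {
        /* ctx is handed both to the completion callback and to the two
         * SGE callbacks above */
        return spdk_nvme_ns_cmd_readv(ns, qpair, lba, lba_count,
                                      cb_fn, ctx, 0 /* io_flags */,
                                      reset_sgl, next_sge);
    }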
00:08:23.418 00:08:23.418 real 0m0.178s 00:08:23.418 user 0m0.051s 00:08:23.418 sys 0m0.084s 00:08:23.418 05:10:16 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.418 05:10:16 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:23.418 ************************************ 00:08:23.418 END TEST nvme_e2edp 00:08:23.418 ************************************ 00:08:23.418 05:10:16 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:23.418 05:10:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.418 05:10:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.418 05:10:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.418 ************************************ 00:08:23.418 START TEST nvme_reserve 00:08:23.418 ************************************ 00:08:23.418 05:10:16 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:23.676 ===================================================== 00:08:23.676 NVMe Controller at PCI bus 0, device 16, function 0 00:08:23.676 ===================================================== 00:08:23.676 Reservations: Not Supported 00:08:23.676 ===================================================== 00:08:23.676 NVMe Controller at PCI bus 0, device 17, function 0 00:08:23.676 ===================================================== 00:08:23.676 Reservations: Not Supported 00:08:23.676 ===================================================== 00:08:23.676 NVMe Controller at PCI bus 0, device 19, function 0 00:08:23.676 ===================================================== 00:08:23.676 Reservations: Not Supported 00:08:23.676 ===================================================== 00:08:23.676 NVMe Controller at PCI bus 0, device 18, function 0 00:08:23.676 ===================================================== 00:08:23.676 Reservations: Not Supported 00:08:23.676 Reservation test passed 00:08:23.676 ************************************ 00:08:23.676 END TEST nvme_reserve 00:08:23.676 ************************************ 00:08:23.676 00:08:23.676 real 0m0.178s 00:08:23.676 user 0m0.052s 00:08:23.676 sys 0m0.082s 00:08:23.676 05:10:16 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.676 05:10:16 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:23.676 05:10:16 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:23.676 05:10:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.676 05:10:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.676 05:10:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.676 ************************************ 00:08:23.676 START TEST nvme_err_injection 00:08:23.676 ************************************ 00:08:23.676 05:10:16 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:23.934 NVMe Error Injection test 00:08:23.934 Attached to 0000:00:10.0 00:08:23.934 Attached to 0000:00:11.0 00:08:23.934 Attached to 0000:00:13.0 00:08:23.934 Attached to 0000:00:12.0 00:08:23.934 0000:00:10.0: get features failed as expected 00:08:23.934 0000:00:11.0: get features failed as expected 00:08:23.934 0000:00:13.0: get features failed as expected 00:08:23.934 0000:00:12.0: get features failed as expected 00:08:23.934 
0000:00:10.0: get features successfully as expected 00:08:23.934 0000:00:11.0: get features successfully as expected 00:08:23.934 0000:00:13.0: get features successfully as expected 00:08:23.934 0000:00:12.0: get features successfully as expected 00:08:23.934 0000:00:10.0: read failed as expected 00:08:23.934 0000:00:11.0: read failed as expected 00:08:23.934 0000:00:13.0: read failed as expected 00:08:23.934 0000:00:12.0: read failed as expected 00:08:23.934 0000:00:11.0: read successfully as expected 00:08:23.934 0000:00:10.0: read successfully as expected 00:08:23.934 0000:00:13.0: read successfully as expected 00:08:23.934 0000:00:12.0: read successfully as expected 00:08:23.934 Cleaning up... 00:08:23.934 00:08:23.934 real 0m0.189s 00:08:23.934 user 0m0.069s 00:08:23.934 sys 0m0.082s 00:08:23.934 ************************************ 00:08:23.934 END TEST nvme_err_injection 00:08:23.934 ************************************ 00:08:23.934 05:10:17 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.934 05:10:17 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:23.934 05:10:17 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:23.934 05:10:17 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:23.934 05:10:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.934 05:10:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.934 ************************************ 00:08:23.934 START TEST nvme_overhead 00:08:23.934 ************************************ 00:08:23.934 05:10:17 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:25.307 Initializing NVMe Controllers 00:08:25.307 Attached to 0000:00:10.0 00:08:25.307 Attached to 0000:00:11.0 00:08:25.307 Attached to 0000:00:13.0 00:08:25.307 Attached to 0000:00:12.0 00:08:25.307 Initialization complete. Launching workers. 
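The "failed as expected" / "successfully as expected" pairs above are the err_injection test arming SPDK's command error injection, watching a GET FEATURES admin command fail, then clearing the injection and repeating the command. A sketch of the arming step, with the caveat that the argument semantics are paraphrased from spdk/nvme.h and the helper and callback names are hypothetical:

    /* Sketch: make the next GET FEATURES admin command complete with an
     * error. Passing NULL for the qpair targets the admin queue. */
    #include <stdbool.h>
    #include "spdk/nvme.h"

    static void
    expect_failure_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        bool *failed = cb_arg;

        *failed = spdk_nvme_cpl_is_error(cpl); /* true while injection armed */
    }

    static int
    arm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
    {
        return spdk_nvme_qpair_add_cmd_error_injection(
            ctrlr, NULL /* admin qpair */,
            SPDK_NVME_OPC_GET_FEATURES,
            false /* submit the command, then override its status */,
            0 /* timeout_in_us */, 1 /* inject once */,
            SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_INVALID_FIELD);
    }

Removing the injection again with spdk_nvme_qpair_remove_cmd_error_injection restores normal completions, which corresponds to the "successfully as expected" half of the output.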
00:08:25.307 submit (in ns) avg, min, max = 11343.6, 10194.6, 322153.1
00:08:25.307 complete (in ns) avg, min, max = 7437.6, 7084.6, 67906.2
00:08:25.307
00:08:25.307 Submit histogram
00:08:25.307 ================
00:08:25.307 Range in us Cumulative Count
00:08:25.307 [per-bucket cumulative counts from 10.191 us (0.0058%) to 322.954 us (100.0000%); the bulk of submissions fall in the 10.8-12.3 us buckets]
00:08:25.308
00:08:25.308 Complete histogram
00:08:25.308 ==================
00:08:25.308 Range in us Cumulative Count
00:08:25.308 [per-bucket cumulative counts from 7.040 us (0.0058%) upward; most completions fall in the 7.1-7.6 us buckets]
00:08:25.308 14.572 - 14.671: 99.7155%
( 4) 00:08:25.308 14.671 - 14.769: 99.7213% ( 1) 00:08:25.308 14.966 - 15.065: 99.7446% ( 4) 00:08:25.308 15.065 - 15.163: 99.7620% ( 3) 00:08:25.308 15.163 - 15.262: 99.7794% ( 3) 00:08:25.308 15.262 - 15.360: 99.7852% ( 1) 00:08:25.308 15.360 - 15.458: 99.7968% ( 2) 00:08:25.308 15.458 - 15.557: 99.8026% ( 1) 00:08:25.308 15.557 - 15.655: 99.8084% ( 1) 00:08:25.308 15.754 - 15.852: 99.8200% ( 2) 00:08:25.308 15.951 - 16.049: 99.8258% ( 1) 00:08:25.308 16.049 - 16.148: 99.8316% ( 1) 00:08:25.308 16.148 - 16.246: 99.8374% ( 1) 00:08:25.308 16.542 - 16.640: 99.8491% ( 2) 00:08:25.308 16.738 - 16.837: 99.8549% ( 1) 00:08:25.308 16.935 - 17.034: 99.8665% ( 2) 00:08:25.308 17.034 - 17.132: 99.8723% ( 1) 00:08:25.308 17.132 - 17.231: 99.8781% ( 1) 00:08:25.308 17.231 - 17.329: 99.8839% ( 1) 00:08:25.309 17.329 - 17.428: 99.8897% ( 1) 00:08:25.309 17.526 - 17.625: 99.8955% ( 1) 00:08:25.309 18.018 - 18.117: 99.9013% ( 1) 00:08:25.309 18.314 - 18.412: 99.9071% ( 1) 00:08:25.309 18.412 - 18.511: 99.9129% ( 1) 00:08:25.309 18.511 - 18.609: 99.9187% ( 1) 00:08:25.309 18.609 - 18.708: 99.9245% ( 1) 00:08:25.309 19.003 - 19.102: 99.9303% ( 1) 00:08:25.309 20.677 - 20.775: 99.9361% ( 1) 00:08:25.309 20.972 - 21.071: 99.9419% ( 1) 00:08:25.309 21.169 - 21.268: 99.9478% ( 1) 00:08:25.309 21.662 - 21.760: 99.9536% ( 1) 00:08:25.309 21.858 - 21.957: 99.9594% ( 1) 00:08:25.309 25.797 - 25.994: 99.9652% ( 1) 00:08:25.309 42.535 - 42.732: 99.9710% ( 1) 00:08:25.309 50.018 - 50.215: 99.9768% ( 1) 00:08:25.309 50.215 - 50.412: 99.9826% ( 1) 00:08:25.309 57.502 - 57.895: 99.9884% ( 1) 00:08:25.309 63.409 - 63.803: 99.9942% ( 1) 00:08:25.309 67.742 - 68.135: 100.0000% ( 1) 00:08:25.309 00:08:25.309 00:08:25.309 real 0m1.198s 00:08:25.309 user 0m1.058s 00:08:25.309 sys 0m0.088s 00:08:25.309 05:10:18 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.309 05:10:18 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:25.309 ************************************ 00:08:25.309 END TEST nvme_overhead 00:08:25.309 ************************************ 00:08:25.309 05:10:18 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:25.309 05:10:18 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:25.309 05:10:18 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.309 05:10:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.309 ************************************ 00:08:25.309 START TEST nvme_arbitration 00:08:25.309 ************************************ 00:08:25.309 05:10:18 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:28.619 Initializing NVMe Controllers 00:08:28.619 Attached to 0000:00:10.0 00:08:28.619 Attached to 0000:00:11.0 00:08:28.619 Attached to 0000:00:13.0 00:08:28.619 Attached to 0000:00:12.0 00:08:28.619 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:28.619 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:28.619 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:28.619 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:28.619 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:28.619 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:28.619 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:28.619 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 
-b 0 -n 100000 -i 0 00:08:28.619 Initialization complete. Launching workers. 00:08:28.619 Starting thread on core 1 with urgent priority queue 00:08:28.619 Starting thread on core 2 with urgent priority queue 00:08:28.619 Starting thread on core 3 with urgent priority queue 00:08:28.619 Starting thread on core 0 with urgent priority queue 00:08:28.619 QEMU NVMe Ctrl (12340 ) core 0: 6378.67 IO/s 15.68 secs/100000 ios 00:08:28.619 QEMU NVMe Ctrl (12342 ) core 0: 6378.67 IO/s 15.68 secs/100000 ios 00:08:28.619 QEMU NVMe Ctrl (12341 ) core 1: 6378.67 IO/s 15.68 secs/100000 ios 00:08:28.619 QEMU NVMe Ctrl (12342 ) core 1: 6378.67 IO/s 15.68 secs/100000 ios 00:08:28.619 QEMU NVMe Ctrl (12343 ) core 2: 6421.33 IO/s 15.57 secs/100000 ios 00:08:28.620 QEMU NVMe Ctrl (12342 ) core 3: 6422.00 IO/s 15.57 secs/100000 ios 00:08:28.620 ======================================================== 00:08:28.620 00:08:28.620 00:08:28.620 real 0m3.224s 00:08:28.620 user 0m9.042s 00:08:28.620 sys 0m0.106s 00:08:28.620 05:10:21 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.620 ************************************ 00:08:28.620 05:10:21 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:28.620 END TEST nvme_arbitration 00:08:28.620 ************************************ 00:08:28.620 05:10:21 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:28.620 05:10:21 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:28.620 05:10:21 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.620 05:10:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:28.620 ************************************ 00:08:28.620 START TEST nvme_single_aen 00:08:28.620 ************************************ 00:08:28.620 05:10:21 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:28.620 Asynchronous Event Request test 00:08:28.620 Attached to 0000:00:10.0 00:08:28.620 Attached to 0000:00:11.0 00:08:28.620 Attached to 0000:00:13.0 00:08:28.620 Attached to 0000:00:12.0 00:08:28.620 Reset controller to setup AER completions for this process 00:08:28.620 Registering asynchronous event callbacks... 
00:08:28.620 Getting orig temperature thresholds of all controllers 00:08:28.620 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:28.620 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:28.620 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:28.620 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:28.620 Setting all controllers temperature threshold low to trigger AER 00:08:28.620 Waiting for all controllers temperature threshold to be set lower 00:08:28.620 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:28.620 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:28.620 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:28.620 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:28.620 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:28.620 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:28.620 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:28.620 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:28.620 Waiting for all controllers to trigger AER and reset threshold 00:08:28.620 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.620 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.620 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.620 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:28.620 Cleaning up... 00:08:28.620 00:08:28.620 real 0m0.178s 00:08:28.620 user 0m0.066s 00:08:28.620 sys 0m0.077s 00:08:28.620 ************************************ 00:08:28.620 END TEST nvme_single_aen 00:08:28.620 ************************************ 00:08:28.620 05:10:21 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.620 05:10:21 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:28.620 05:10:21 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:28.620 05:10:21 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:28.620 05:10:21 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.620 05:10:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:28.620 ************************************ 00:08:28.620 START TEST nvme_doorbell_aers 00:08:28.620 ************************************ 00:08:28.620 05:10:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:28.620 05:10:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:28.620 05:10:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:28.620 05:10:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:28.620 05:10:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:28.620 05:10:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:28.620 05:10:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:28.620 05:10:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:28.620 05:10:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:28.620 05:10:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
00:08:28.878 05:10:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:28.878 05:10:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:28.878 05:10:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:28.878 05:10:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:28.878 [2024-11-10 05:10:22.072330] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75424) is not found. Dropping the request. 00:08:38.857 Executing: test_write_invalid_db 00:08:38.857 Waiting for AER completion... 00:08:38.857 Failure: test_write_invalid_db 00:08:38.857 00:08:38.857 Executing: test_invalid_db_write_overflow_sq 00:08:38.857 Waiting for AER completion... 00:08:38.857 Failure: test_invalid_db_write_overflow_sq 00:08:38.857 00:08:38.857 Executing: test_invalid_db_write_overflow_cq 00:08:38.857 Waiting for AER completion... 00:08:38.857 Failure: test_invalid_db_write_overflow_cq 00:08:38.857 00:08:38.857 05:10:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:38.857 05:10:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:39.116 [2024-11-10 05:10:32.107583] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75424) is not found. Dropping the request. 00:08:49.117 Executing: test_write_invalid_db 00:08:49.117 Waiting for AER completion... 00:08:49.117 Failure: test_write_invalid_db 00:08:49.117 00:08:49.117 Executing: test_invalid_db_write_overflow_sq 00:08:49.117 Waiting for AER completion... 00:08:49.117 Failure: test_invalid_db_write_overflow_sq 00:08:49.117 00:08:49.117 Executing: test_invalid_db_write_overflow_cq 00:08:49.117 Waiting for AER completion... 00:08:49.117 Failure: test_invalid_db_write_overflow_cq 00:08:49.117 00:08:49.117 05:10:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:49.117 05:10:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:49.117 [2024-11-10 05:10:42.146218] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75424) is not found. Dropping the request. 00:08:59.084 Executing: test_write_invalid_db 00:08:59.084 Waiting for AER completion... 00:08:59.084 Failure: test_write_invalid_db 00:08:59.084 00:08:59.084 Executing: test_invalid_db_write_overflow_sq 00:08:59.084 Waiting for AER completion... 00:08:59.084 Failure: test_invalid_db_write_overflow_sq 00:08:59.084 00:08:59.084 Executing: test_invalid_db_write_overflow_cq 00:08:59.084 Waiting for AER completion... 
00:08:59.084 Failure: test_invalid_db_write_overflow_cq 00:08:59.084 00:08:59.084 05:10:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:59.084 05:10:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:59.084 [2024-11-10 05:10:52.165849] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75424) is not found. Dropping the request. 00:09:09.054 Executing: test_write_invalid_db 00:09:09.054 Waiting for AER completion... 00:09:09.054 Failure: test_write_invalid_db 00:09:09.054 00:09:09.054 Executing: test_invalid_db_write_overflow_sq 00:09:09.054 Waiting for AER completion... 00:09:09.054 Failure: test_invalid_db_write_overflow_sq 00:09:09.054 00:09:09.054 Executing: test_invalid_db_write_overflow_cq 00:09:09.054 Waiting for AER completion... 00:09:09.054 Failure: test_invalid_db_write_overflow_cq 00:09:09.054 00:09:09.054 ************************************ 00:09:09.054 END TEST nvme_doorbell_aers 00:09:09.054 ************************************ 00:09:09.054 00:09:09.054 real 0m40.189s 00:09:09.054 user 0m34.241s 00:09:09.054 sys 0m5.596s 00:09:09.054 05:11:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.054 05:11:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:09.054 05:11:02 nvme -- nvme/nvme.sh@97 -- # uname 00:09:09.054 05:11:02 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:09.054 05:11:02 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:09.054 05:11:02 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:09.054 05:11:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.054 05:11:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:09.054 ************************************ 00:09:09.054 START TEST nvme_multi_aen 00:09:09.054 ************************************ 00:09:09.054 05:11:02 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:09.054 [2024-11-10 05:11:02.234690] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75424) is not found. Dropping the request. 00:09:09.054 [2024-11-10 05:11:02.234761] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75424) is not found. Dropping the request. 00:09:09.054 [2024-11-10 05:11:02.234774] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75424) is not found. Dropping the request. 00:09:09.054 [2024-11-10 05:11:02.235926] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75424) is not found. Dropping the request. 00:09:09.054 [2024-11-10 05:11:02.235948] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75424) is not found. Dropping the request. 00:09:09.054 [2024-11-10 05:11:02.235957] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75424) is not found. Dropping the request. 00:09:09.054 [2024-11-10 05:11:02.236867] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75424) is not found. 
Dropping the request. 00:09:09.054 [2024-11-10 05:11:02.236894] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75424) is not found. Dropping the request. 00:09:09.054 [2024-11-10 05:11:02.236903] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75424) is not found. Dropping the request. 00:09:09.054 [2024-11-10 05:11:02.237770] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75424) is not found. Dropping the request. 00:09:09.054 [2024-11-10 05:11:02.237795] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75424) is not found. Dropping the request. 00:09:09.054 [2024-11-10 05:11:02.237804] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75424) is not found. Dropping the request. 00:09:09.054 Child process pid: 75945 00:09:09.312 [Child] Asynchronous Event Request test 00:09:09.312 [Child] Attached to 0000:00:10.0 00:09:09.312 [Child] Attached to 0000:00:11.0 00:09:09.312 [Child] Attached to 0000:00:13.0 00:09:09.312 [Child] Attached to 0000:00:12.0 00:09:09.312 [Child] Registering asynchronous event callbacks... 00:09:09.312 [Child] Getting orig temperature thresholds of all controllers 00:09:09.312 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:09.312 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:09.312 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:09.312 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:09.312 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:09.312 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:09.312 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:09.312 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:09.312 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:09.312 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:09.312 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:09.312 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:09.312 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:09.312 [Child] Cleaning up... 00:09:09.312 Asynchronous Event Request test 00:09:09.312 Attached to 0000:00:10.0 00:09:09.312 Attached to 0000:00:11.0 00:09:09.312 Attached to 0000:00:13.0 00:09:09.312 Attached to 0000:00:12.0 00:09:09.312 Reset controller to setup AER completions for this process 00:09:09.312 Registering asynchronous event callbacks... 
00:09:09.312 Getting orig temperature thresholds of all controllers 00:09:09.312 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:09.312 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:09.312 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:09.312 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:09.312 Setting all controllers temperature threshold low to trigger AER 00:09:09.312 Waiting for all controllers temperature threshold to be set lower 00:09:09.312 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:09.312 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:09.312 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:09.312 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:09.312 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:09.312 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:09.312 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:09.312 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:09.312 Waiting for all controllers to trigger AER and reset threshold 00:09:09.312 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:09.312 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:09.312 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:09.312 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:09.312 Cleaning up... 00:09:09.312 00:09:09.312 real 0m0.396s 00:09:09.312 user 0m0.117s 00:09:09.312 sys 0m0.167s 00:09:09.312 05:11:02 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.312 05:11:02 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:09.312 ************************************ 00:09:09.312 END TEST nvme_multi_aen 00:09:09.312 ************************************ 00:09:09.312 05:11:02 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:09.312 05:11:02 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:09.312 05:11:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.312 05:11:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:09.312 ************************************ 00:09:09.312 START TEST nvme_startup 00:09:09.312 ************************************ 00:09:09.312 05:11:02 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:09.570 Initializing NVMe Controllers 00:09:09.570 Attached to 0000:00:10.0 00:09:09.570 Attached to 0000:00:11.0 00:09:09.570 Attached to 0000:00:13.0 00:09:09.570 Attached to 0000:00:12.0 00:09:09.570 Initialization complete. 00:09:09.570 Time used:135444.953 (us). 
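Both AER tests above follow the same recipe: register an asynchronous-event handler, drop each controller's temperature threshold below its current (emulated, 323 Kelvin) temperature so a temperature-through-threshold event fires, then restore the original threshold. A minimal sketch of that recipe, assuming the threshold is programmed in Kelvin through cdw11; trigger_temperature_aen and the busy-wait loop are illustrative, while the spdk_* calls are the public admin API:

    /* Sketch: provoke a temperature AEN by setting the threshold below the
     * controller's current temperature, then poll until the event arrives. */
    #include <stdbool.h>
    #include "spdk/nvme.h"

    static volatile bool g_aen_seen;

    static void
    aer_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (!spdk_nvme_cpl_is_error(cpl)) {
            g_aen_seen = true; /* the test then reads log page 2 (SMART) */
        }
    }

    static void
    set_feature_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
    }

    static void
    trigger_temperature_aen(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
        /* cdw11 carries the threshold in Kelvin; 200 K is well below the
         * 323 K current temperature reported above */
        spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                                        SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                        200, 0, NULL, 0,
                                        set_feature_done, NULL);
        while (!g_aen_seen) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
    }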
00:09:09.570 00:09:09.570 real 0m0.191s 00:09:09.570 user 0m0.060s 00:09:09.570 sys 0m0.083s 00:09:09.570 05:11:02 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.570 05:11:02 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:09.570 ************************************ 00:09:09.570 END TEST nvme_startup 00:09:09.570 ************************************ 00:09:09.570 05:11:02 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:09.570 05:11:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:09.570 05:11:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.570 05:11:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:09.570 ************************************ 00:09:09.570 START TEST nvme_multi_secondary 00:09:09.570 ************************************ 00:09:09.570 05:11:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:09:09.570 05:11:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75995 00:09:09.570 05:11:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75996 00:09:09.570 05:11:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:09.570 05:11:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:09.570 05:11:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:12.848 Initializing NVMe Controllers 00:09:12.848 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:12.848 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:12.848 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:12.848 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:12.848 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:12.848 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:12.848 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:12.848 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:12.848 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:12.848 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:12.848 Initialization complete. Launching workers. 
00:09:12.848 ======================================================== 00:09:12.848 Latency(us) 00:09:12.848 Device Information : IOPS MiB/s Average min max 00:09:12.848 PCIE (0000:00:10.0) NSID 1 from core 1: 7220.96 28.21 2214.41 888.86 6280.31 00:09:12.848 PCIE (0000:00:11.0) NSID 1 from core 1: 7220.96 28.21 2215.40 926.51 6451.35 00:09:12.848 PCIE (0000:00:13.0) NSID 1 from core 1: 7220.96 28.21 2215.50 899.55 6384.68 00:09:12.848 PCIE (0000:00:12.0) NSID 1 from core 1: 7220.96 28.21 2215.45 931.91 5884.14 00:09:12.848 PCIE (0000:00:12.0) NSID 2 from core 1: 7220.96 28.21 2215.53 931.18 5866.17 00:09:12.848 PCIE (0000:00:12.0) NSID 3 from core 1: 7220.96 28.21 2215.51 910.22 6179.95 00:09:12.848 ======================================================== 00:09:12.848 Total : 43325.77 169.24 2215.30 888.86 6451.35 00:09:12.848 00:09:13.107 Initializing NVMe Controllers 00:09:13.107 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:13.107 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:13.107 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:13.107 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:13.107 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:13.107 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:13.107 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:13.107 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:13.107 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:13.107 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:13.107 Initialization complete. Launching workers. 00:09:13.107 ======================================================== 00:09:13.107 Latency(us) 00:09:13.107 Device Information : IOPS MiB/s Average min max 00:09:13.107 PCIE (0000:00:10.0) NSID 1 from core 2: 3008.52 11.75 5316.54 1071.81 12551.10 00:09:13.107 PCIE (0000:00:11.0) NSID 1 from core 2: 3008.52 11.75 5317.98 1072.30 12854.63 00:09:13.107 PCIE (0000:00:13.0) NSID 1 from core 2: 3008.52 11.75 5317.55 1123.32 12850.92 00:09:13.107 PCIE (0000:00:12.0) NSID 1 from core 2: 3008.52 11.75 5318.20 1015.87 12627.50 00:09:13.107 PCIE (0000:00:12.0) NSID 2 from core 2: 3008.52 11.75 5317.76 1081.27 12261.62 00:09:13.107 PCIE (0000:00:12.0) NSID 3 from core 2: 3008.52 11.75 5317.75 1097.27 12450.10 00:09:13.107 ======================================================== 00:09:13.107 Total : 18051.14 70.51 5317.63 1015.87 12854.63 00:09:13.107 00:09:13.107 05:11:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75995 00:09:15.006 Initializing NVMe Controllers 00:09:15.006 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:15.006 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:15.006 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:15.006 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:15.006 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:15.006 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:15.006 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:15.006 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:15.006 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:15.006 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:15.006 Initialization complete. Launching workers. 
00:09:15.006 ======================================================== 00:09:15.006 Latency(us) 00:09:15.006 Device Information : IOPS MiB/s Average min max 00:09:15.006 PCIE (0000:00:10.0) NSID 1 from core 0: 10443.03 40.79 1530.93 736.66 7504.42 00:09:15.006 PCIE (0000:00:11.0) NSID 1 from core 0: 10443.03 40.79 1531.74 729.15 8766.21 00:09:15.006 PCIE (0000:00:13.0) NSID 1 from core 0: 10443.03 40.79 1531.72 633.74 8248.21 00:09:15.006 PCIE (0000:00:12.0) NSID 1 from core 0: 10443.03 40.79 1531.70 572.44 7818.58 00:09:15.006 PCIE (0000:00:12.0) NSID 2 from core 0: 10443.03 40.79 1531.68 498.21 7697.36 00:09:15.006 PCIE (0000:00:12.0) NSID 3 from core 0: 10443.03 40.79 1531.66 399.65 7454.66 00:09:15.006 ======================================================== 00:09:15.006 Total : 62658.20 244.76 1531.57 399.65 8766.21 00:09:15.006 00:09:15.006 05:11:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75996 00:09:15.006 05:11:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=76071 00:09:15.006 05:11:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:15.006 05:11:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=76072 00:09:15.006 05:11:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:15.006 05:11:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:18.289 Initializing NVMe Controllers 00:09:18.289 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:18.289 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:18.289 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:18.289 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:18.289 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:18.289 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:18.289 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:18.289 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:18.289 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:18.289 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:18.289 Initialization complete. Launching workers. 
00:09:18.289 ======================================================== 00:09:18.289 Latency(us) 00:09:18.289 Device Information : IOPS MiB/s Average min max 00:09:18.289 PCIE (0000:00:10.0) NSID 1 from core 1: 7174.33 28.02 2228.79 795.17 6073.03 00:09:18.289 PCIE (0000:00:11.0) NSID 1 from core 1: 7174.33 28.02 2230.16 807.09 6086.67 00:09:18.289 PCIE (0000:00:13.0) NSID 1 from core 1: 7174.33 28.02 2230.12 815.48 6468.84 00:09:18.289 PCIE (0000:00:12.0) NSID 1 from core 1: 7174.33 28.02 2230.11 821.91 6501.55 00:09:18.289 PCIE (0000:00:12.0) NSID 2 from core 1: 7174.33 28.02 2230.11 812.59 6323.42 00:09:18.289 PCIE (0000:00:12.0) NSID 3 from core 1: 7174.33 28.02 2230.14 820.05 6005.52 00:09:18.289 ======================================================== 00:09:18.289 Total : 43045.99 168.15 2229.91 795.17 6501.55 00:09:18.289 00:09:18.289 Initializing NVMe Controllers 00:09:18.289 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:18.289 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:18.289 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:18.289 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:18.289 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:18.289 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:18.289 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:18.289 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:18.289 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:18.289 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:18.289 Initialization complete. Launching workers. 00:09:18.289 ======================================================== 00:09:18.289 Latency(us) 00:09:18.289 Device Information : IOPS MiB/s Average min max 00:09:18.289 PCIE (0000:00:10.0) NSID 1 from core 0: 7303.44 28.53 2189.36 780.54 14781.37 00:09:18.289 PCIE (0000:00:11.0) NSID 1 from core 0: 7303.44 28.53 2190.24 807.01 14704.30 00:09:18.289 PCIE (0000:00:13.0) NSID 1 from core 0: 7303.44 28.53 2190.15 780.88 14302.25 00:09:18.289 PCIE (0000:00:12.0) NSID 1 from core 0: 7303.44 28.53 2190.07 781.40 13643.05 00:09:18.289 PCIE (0000:00:12.0) NSID 2 from core 0: 7303.44 28.53 2189.99 590.85 13685.41 00:09:18.289 PCIE (0000:00:12.0) NSID 3 from core 0: 7303.44 28.53 2189.92 403.15 14905.15 00:09:18.289 ======================================================== 00:09:18.289 Total : 43820.63 171.17 2189.96 403.15 14905.15 00:09:18.289 00:09:20.195 Initializing NVMe Controllers 00:09:20.195 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:20.195 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:20.195 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:20.195 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:20.195 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:20.195 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:20.195 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:20.195 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:20.195 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:20.195 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:20.195 Initialization complete. Launching workers. 
00:09:20.195 ======================================================== 00:09:20.195 Latency(us) 00:09:20.195 Device Information : IOPS MiB/s Average min max 00:09:20.195 PCIE (0000:00:10.0) NSID 1 from core 2: 4300.83 16.80 3718.35 803.28 13216.26 00:09:20.195 PCIE (0000:00:11.0) NSID 1 from core 2: 4300.83 16.80 3719.44 759.03 13074.04 00:09:20.195 PCIE (0000:00:13.0) NSID 1 from core 2: 4300.83 16.80 3719.58 810.08 12574.68 00:09:20.195 PCIE (0000:00:12.0) NSID 1 from core 2: 4300.83 16.80 3719.90 820.69 12385.15 00:09:20.195 PCIE (0000:00:12.0) NSID 2 from core 2: 4300.83 16.80 3719.47 813.33 12392.22 00:09:20.195 PCIE (0000:00:12.0) NSID 3 from core 2: 4300.83 16.80 3719.80 715.79 13647.74 00:09:20.195 ======================================================== 00:09:20.195 Total : 25804.99 100.80 3719.42 715.79 13647.74 00:09:20.195 00:09:20.195 ************************************ 00:09:20.195 END TEST nvme_multi_secondary 00:09:20.195 ************************************ 00:09:20.195 05:11:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 76071 00:09:20.195 05:11:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 76072 00:09:20.195 00:09:20.195 real 0m10.428s 00:09:20.195 user 0m18.282s 00:09:20.195 sys 0m0.558s 00:09:20.195 05:11:13 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.195 05:11:13 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:20.195 05:11:13 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:20.195 05:11:13 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:20.195 05:11:13 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/75039 ]] 00:09:20.195 05:11:13 nvme -- common/autotest_common.sh@1090 -- # kill 75039 00:09:20.195 05:11:13 nvme -- common/autotest_common.sh@1091 -- # wait 75039 00:09:20.195 [2024-11-10 05:11:13.201494] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75944) is not found. Dropping the request. 00:09:20.195 [2024-11-10 05:11:13.202205] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75944) is not found. Dropping the request. 00:09:20.195 [2024-11-10 05:11:13.202337] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75944) is not found. Dropping the request. 00:09:20.195 [2024-11-10 05:11:13.202427] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75944) is not found. Dropping the request. 00:09:20.195 [2024-11-10 05:11:13.203897] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75944) is not found. Dropping the request. 00:09:20.195 [2024-11-10 05:11:13.203969] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75944) is not found. Dropping the request. 00:09:20.195 [2024-11-10 05:11:13.204013] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75944) is not found. Dropping the request. 00:09:20.195 [2024-11-10 05:11:13.204027] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75944) is not found. Dropping the request. 00:09:20.195 [2024-11-10 05:11:13.204522] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75944) is not found. Dropping the request. 
00:09:20.195 [2024-11-10 05:11:13.204559] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75944) is not found. Dropping the request. 00:09:20.195 [2024-11-10 05:11:13.204571] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75944) is not found. Dropping the request. 00:09:20.195 [2024-11-10 05:11:13.204584] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75944) is not found. Dropping the request. 00:09:20.195 [2024-11-10 05:11:13.205033] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75944) is not found. Dropping the request. 00:09:20.195 [2024-11-10 05:11:13.205070] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75944) is not found. Dropping the request. 00:09:20.195 [2024-11-10 05:11:13.205081] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75944) is not found. Dropping the request. 00:09:20.195 [2024-11-10 05:11:13.205092] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75944) is not found. Dropping the request. 00:09:20.195 05:11:13 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:20.195 05:11:13 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:20.195 05:11:13 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:20.195 05:11:13 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:20.195 05:11:13 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.195 05:11:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.195 ************************************ 00:09:20.195 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:20.195 ************************************ 00:09:20.195 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:20.195 * Looking for test storage... 
00:09:20.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:20.195 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:20.195 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:09:20.195 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:20.457 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:20.457 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.457 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.457 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.457 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.457 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.457 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.457 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.457 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.457 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.457 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.457 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.457 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:20.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.458 --rc genhtml_branch_coverage=1 00:09:20.458 --rc genhtml_function_coverage=1 00:09:20.458 --rc genhtml_legend=1 00:09:20.458 --rc geninfo_all_blocks=1 00:09:20.458 --rc geninfo_unexecuted_blocks=1 00:09:20.458 00:09:20.458 ' 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:20.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.458 --rc genhtml_branch_coverage=1 00:09:20.458 --rc genhtml_function_coverage=1 00:09:20.458 --rc genhtml_legend=1 00:09:20.458 --rc geninfo_all_blocks=1 00:09:20.458 --rc geninfo_unexecuted_blocks=1 00:09:20.458 00:09:20.458 ' 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:20.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.458 --rc genhtml_branch_coverage=1 00:09:20.458 --rc genhtml_function_coverage=1 00:09:20.458 --rc genhtml_legend=1 00:09:20.458 --rc geninfo_all_blocks=1 00:09:20.458 --rc geninfo_unexecuted_blocks=1 00:09:20.458 00:09:20.458 ' 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:20.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.458 --rc genhtml_branch_coverage=1 00:09:20.458 --rc genhtml_function_coverage=1 00:09:20.458 --rc genhtml_legend=1 00:09:20.458 --rc geninfo_all_blocks=1 00:09:20.458 --rc geninfo_unexecuted_blocks=1 00:09:20.458 00:09:20.458 ' 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:20.458 
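[editor's note] The cmp_versions walk traced just above (scripts/common.sh deciding whether lcov 1.15 predates 2) is a plain component-wise compare: split both versions on ".-:", then compare the pieces as decimals left to right. A minimal sketch of the same idea, assuming the split-on-".-:" convention seen in the trace; my_cmp_versions is a hypothetical stand-in, not the helper's literal source:

    # Compare two dotted versions component by component; missing components
    # count as 0 (so "2" vs "1.15" compares as 2.0 vs 1.15).
    my_cmp_versions() {
      local op=$2 ver1 ver2 v a b
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$3"
      local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
    }
    my_cmp_versions 1.15 '<' 2 && echo "lcov predates 2"   # prints the message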
05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:20.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76228 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76228 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 76228 ']' 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
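[editor's note] The bdf the test just settled on (0000:00:10.0) falls out of the enumeration traced above: gen_nvme.sh emits an SPDK bdev config blob and jq pulls one PCI address per controller. Roughly, as a sketch grounded in that trace (first_nvme_bdf is a made-up wrapper name):

    # Pick the first NVMe controller address the way the trace above does.
    first_nvme_bdf() {
      local rootdir=/home/vagrant/spdk_repo/spdk bdfs
      # gen_nvme.sh prints attach-controller JSON; keep only the traddr fields.
      bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
      (( ${#bdfs[@]} > 0 )) || return 1        # bail if nothing was found
      echo "${bdfs[0]}"                        # here: 0000:00:10.0 of the four
    }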
00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:20.458 05:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:20.458 [2024-11-10 05:11:13.606589] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:20.458 [2024-11-10 05:11:13.607046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76228 ] 00:09:20.720 [2024-11-10 05:11:13.769861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.720 [2024-11-10 05:11:13.822892] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.720 [2024-11-10 05:11:13.823154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.720 [2024-11-10 05:11:13.823785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.720 [2024-11-10 05:11:13.823798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.291 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.291 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:21.291 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:21.291 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.291 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:21.552 nvme0n1 00:09:21.552 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.552 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:21.552 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_ed6Qd.txt 00:09:21.552 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:21.552 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.552 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:21.552 true 00:09:21.552 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.552 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:21.552 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731215474 00:09:21.552 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76251 00:09:21.552 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:21.552 05:11:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:21.552 05:11:14 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:23.469 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:23.469 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.469 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:23.469 [2024-11-10 05:11:16.558113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:23.470 [2024-11-10 05:11:16.558415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:23.470 [2024-11-10 05:11:16.558438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:23.470 [2024-11-10 05:11:16.558462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:23.470 [2024-11-10 05:11:16.560059] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:23.470 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76251 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76251 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76251 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_ed6Qd.txt 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:23.470 05:11:16 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_ed6Qd.txt 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76228 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 76228 ']' 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 76228 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76228 00:09:23.470 killing process with pid 76228 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76228' 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 76228 00:09:23.470 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 76228 00:09:23.730 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:23.730 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:23.730 
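[editor's note] The two base64_decode_bits passes above unpack the saved completion (AAAAAAAAAAAAAAAAAAACAA==) into SC=0x1 and SCT=0x0, matching the injected error. The arithmetic follows the NVMe CQE status word (bit 0 phase tag, bits 8:1 status code, bits 10:9 status code type); decode_cpl_bits below is a reconstruction of that decode, with the function name and byte offsets being assumptions rather than the helper's literal source:

    # Decode one field of the 16-byte completion's status word.
    decode_cpl_bits() {
      local b64=$1 shift_by=$2 mask=$3 bytes status
      # One "0xNN" per byte, exactly like the hexdump invocation in the trace.
      bytes=($(base64 -d <(printf '%s' "$b64") | hexdump -ve '/1 "0x%02x\n"'))
      # The status word sits in the last two little-endian bytes of the CQE.
      status=$(( (bytes[15] << 8) | bytes[14] ))   # here: 0x0002
      printf '0x%x\n' $(( (status >> shift_by) & mask ))
    }
    decode_cpl_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # -> 0x1 (status code)
    decode_cpl_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # -> 0x0 (status code type)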
************************************ 00:09:23.730 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:23.730 ************************************ 00:09:23.730 00:09:23.730 real 0m3.602s 00:09:23.730 user 0m12.636s 00:09:23.730 sys 0m0.573s 00:09:23.730 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.730 05:11:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:23.730 05:11:16 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:23.730 05:11:16 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:23.730 05:11:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:23.730 05:11:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.730 05:11:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:23.730 ************************************ 00:09:23.730 START TEST nvme_fio 00:09:23.730 ************************************ 00:09:23.730 05:11:16 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:23.730 05:11:16 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:23.730 05:11:16 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:23.730 05:11:16 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:23.730 05:11:16 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:23.730 05:11:16 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:23.730 05:11:16 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:23.730 05:11:16 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:23.730 05:11:16 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:23.991 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:23.991 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:23.991 05:11:17 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:23.991 05:11:17 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:23.991 05:11:17 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:23.991 05:11:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:23.991 05:11:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:23.991 05:11:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:23.991 05:11:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:24.252 05:11:17 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:24.252 05:11:17 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:24.252 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:24.252 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:24.252 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:09:24.252 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:24.252 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:24.252 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:24.252 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:24.252 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:24.252 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:24.252 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:24.252 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:24.252 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:24.252 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:24.252 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:24.252 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:24.252 05:11:17 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:24.512 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:24.512 fio-3.35 00:09:24.512 Starting 1 thread 00:09:31.093 00:09:31.093 test: (groupid=0, jobs=1): err= 0: pid=76374: Sun Nov 10 05:11:23 2024 00:09:31.093 read: IOPS=21.3k, BW=83.1MiB/s (87.1MB/s)(166MiB/2001msec) 00:09:31.093 slat (nsec): min=3476, max=82583, avg=5181.07, stdev=2345.74 00:09:31.093 clat (usec): min=366, max=9144, avg=3005.19, stdev=954.54 00:09:31.093 lat (usec): min=370, max=9159, avg=3010.37, stdev=955.72 00:09:31.093 clat percentiles (usec): 00:09:31.093 | 1.00th=[ 2040], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2442], 00:09:31.093 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2769], 00:09:31.093 | 70.00th=[ 2933], 80.00th=[ 3228], 90.00th=[ 4359], 95.00th=[ 5276], 00:09:31.093 | 99.00th=[ 6652], 99.50th=[ 6980], 99.90th=[ 7963], 99.95th=[ 8356], 00:09:31.093 | 99.99th=[ 8979] 00:09:31.093 bw ( KiB/s): min=82802, max=90456, per=100.00%, avg=85838.00, stdev=4064.85, samples=3 00:09:31.093 iops : min=20700, max=22614, avg=21459.33, stdev=1016.40, samples=3 00:09:31.093 write: IOPS=21.1k, BW=82.5MiB/s (86.5MB/s)(165MiB/2001msec); 0 zone resets 00:09:31.093 slat (nsec): min=3662, max=95237, avg=5396.24, stdev=2428.70 00:09:31.093 clat (usec): min=209, max=9084, avg=3012.58, stdev=944.80 00:09:31.093 lat (usec): min=214, max=9089, avg=3017.98, stdev=945.99 00:09:31.093 clat percentiles (usec): 00:09:31.093 | 1.00th=[ 2057], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2442], 00:09:31.093 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2802], 00:09:31.093 | 70.00th=[ 2933], 80.00th=[ 3228], 90.00th=[ 4293], 95.00th=[ 5276], 00:09:31.093 | 99.00th=[ 6587], 99.50th=[ 6980], 99.90th=[ 7832], 99.95th=[ 8455], 00:09:31.093 | 99.99th=[ 8979] 00:09:31.093 bw ( KiB/s): min=82810, max=90032, per=100.00%, avg=86019.33, stdev=3677.41, samples=3 00:09:31.093 iops : min=20702, max=22508, avg=21504.67, stdev=919.57, samples=3 
00:09:31.093 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:09:31.093 lat (msec) : 2=0.67%, 4=87.19%, 10=12.10% 00:09:31.093 cpu : usr=99.10%, sys=0.15%, ctx=5, majf=0, minf=627 00:09:31.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:31.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:31.093 issued rwts: total=42559,42275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.093 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:31.093 00:09:31.093 Run status group 0 (all jobs): 00:09:31.093 READ: bw=83.1MiB/s (87.1MB/s), 83.1MiB/s-83.1MiB/s (87.1MB/s-87.1MB/s), io=166MiB (174MB), run=2001-2001msec 00:09:31.093 WRITE: bw=82.5MiB/s (86.5MB/s), 82.5MiB/s-82.5MiB/s (86.5MB/s-86.5MB/s), io=165MiB (173MB), run=2001-2001msec 00:09:31.093 ----------------------------------------------------- 00:09:31.093 Suppressions used: 00:09:31.093 count bytes template 00:09:31.093 1 32 /usr/src/fio/parse.c 00:09:31.093 1 8 libtcmalloc_minimal.so 00:09:31.093 ----------------------------------------------------- 00:09:31.093 00:09:31.093 05:11:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:31.093 05:11:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:31.093 05:11:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:31.093 05:11:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:31.093 05:11:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:31.093 05:11:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:31.093 05:11:24 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:31.093 05:11:24 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:31.093 05:11:24 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:31.093 05:11:24 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:31.093 05:11:24 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:31.093 05:11:24 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:31.094 05:11:24 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:31.094 05:11:24 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:31.094 05:11:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:31.094 05:11:24 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:31.094 05:11:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:31.094 05:11:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:31.094 05:11:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:31.094 05:11:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:31.094 05:11:24 nvme.nvme_fio -- 
common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:31.094 05:11:24 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:31.094 05:11:24 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:31.094 05:11:24 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:31.094 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:31.094 fio-3.35 00:09:31.094 Starting 1 thread 00:09:37.708 00:09:37.708 test: (groupid=0, jobs=1): err= 0: pid=76429: Sun Nov 10 05:11:29 2024 00:09:37.708 read: IOPS=20.2k, BW=78.7MiB/s (82.6MB/s)(158MiB/2001msec) 00:09:37.708 slat (nsec): min=4197, max=82404, avg=5271.83, stdev=2487.19 00:09:37.708 clat (usec): min=356, max=10910, avg=3160.44, stdev=1063.85 00:09:37.708 lat (usec): min=361, max=10954, avg=3165.71, stdev=1065.03 00:09:37.708 clat percentiles (usec): 00:09:37.708 | 1.00th=[ 2114], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2474], 00:09:37.708 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2737], 60.00th=[ 2868], 00:09:37.708 | 70.00th=[ 3097], 80.00th=[ 3818], 90.00th=[ 4883], 95.00th=[ 5473], 00:09:37.708 | 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 8455], 99.95th=[ 9503], 00:09:37.708 | 99.99th=[10814] 00:09:37.708 bw ( KiB/s): min=79528, max=82176, per=100.00%, avg=80690.67, stdev=1353.17, samples=3 00:09:37.708 iops : min=19882, max=20544, avg=20172.67, stdev=338.29, samples=3 00:09:37.708 write: IOPS=20.1k, BW=78.5MiB/s (82.3MB/s)(157MiB/2001msec); 0 zone resets 00:09:37.708 slat (nsec): min=4248, max=81340, avg=5439.13, stdev=2458.92 00:09:37.708 clat (usec): min=269, max=10841, avg=3178.22, stdev=1057.64 00:09:37.708 lat (usec): min=274, max=10854, avg=3183.66, stdev=1058.82 00:09:37.708 clat percentiles (usec): 00:09:37.708 | 1.00th=[ 2114], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2474], 00:09:37.708 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2868], 00:09:37.708 | 70.00th=[ 3130], 80.00th=[ 3818], 90.00th=[ 4883], 95.00th=[ 5473], 00:09:37.708 | 99.00th=[ 6652], 99.50th=[ 7111], 99.90th=[ 8586], 99.95th=[ 9765], 00:09:37.708 | 99.99th=[10683] 00:09:37.708 bw ( KiB/s): min=79552, max=82624, per=100.00%, avg=80797.33, stdev=1616.40, samples=3 00:09:37.708 iops : min=19888, max=20656, avg=20199.33, stdev=404.10, samples=3 00:09:37.708 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:37.708 lat (msec) : 2=0.34%, 4=81.19%, 10=18.40%, 20=0.04% 00:09:37.708 cpu : usr=99.00%, sys=0.05%, ctx=4, majf=0, minf=627 00:09:37.708 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:37.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.708 issued rwts: total=40328,40230,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.708 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.708 00:09:37.708 Run status group 0 (all jobs): 00:09:37.708 READ: bw=78.7MiB/s (82.6MB/s), 78.7MiB/s-78.7MiB/s (82.6MB/s-82.6MB/s), io=158MiB (165MB), run=2001-2001msec 00:09:37.708 WRITE: bw=78.5MiB/s (82.3MB/s), 78.5MiB/s-78.5MiB/s (82.3MB/s-82.3MB/s), io=157MiB (165MB), run=2001-2001msec 00:09:37.708 ----------------------------------------------------- 00:09:37.708 Suppressions used: 
00:09:37.708 count bytes template 00:09:37.708 1 32 /usr/src/fio/parse.c 00:09:37.708 1 8 libtcmalloc_minimal.so 00:09:37.708 ----------------------------------------------------- 00:09:37.708 00:09:37.708 05:11:30 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:37.708 05:11:30 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:37.708 05:11:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:37.708 05:11:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:37.708 05:11:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:37.708 05:11:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:37.708 05:11:30 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:37.708 05:11:30 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:37.708 05:11:30 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:37.708 05:11:30 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:37.708 05:11:30 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:37.708 05:11:30 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:37.708 05:11:30 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:37.708 05:11:30 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:37.708 05:11:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:37.708 05:11:30 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:37.708 05:11:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:37.708 05:11:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:37.708 05:11:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:37.708 05:11:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:37.708 05:11:30 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:37.708 05:11:30 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:37.708 05:11:30 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:37.708 05:11:30 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:37.708 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:37.708 fio-3.35 00:09:37.708 Starting 1 thread 00:09:44.278 00:09:44.278 test: (groupid=0, jobs=1): err= 0: pid=76484: Sun Nov 10 05:11:36 2024 00:09:44.278 read: IOPS=19.9k, BW=77.8MiB/s (81.6MB/s)(156MiB/2001msec) 00:09:44.278 slat (nsec): min=3360, max=87789, avg=5201.64, stdev=2533.36 00:09:44.278 clat (usec): min=292, max=9054, avg=3197.07, stdev=1076.71 00:09:44.278 lat 
(usec): min=297, max=9103, avg=3202.27, stdev=1077.82 00:09:44.278 clat percentiles (usec): 00:09:44.278 | 1.00th=[ 1926], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2442], 00:09:44.278 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2769], 60.00th=[ 2933], 00:09:44.278 | 70.00th=[ 3261], 80.00th=[ 3982], 90.00th=[ 4883], 95.00th=[ 5669], 00:09:44.278 | 99.00th=[ 6652], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7635], 00:09:44.278 | 99.99th=[ 8717] 00:09:44.278 bw ( KiB/s): min=72944, max=80128, per=97.30%, avg=77528.00, stdev=3981.79, samples=3 00:09:44.279 iops : min=18236, max=20032, avg=19382.00, stdev=995.45, samples=3 00:09:44.279 write: IOPS=19.9k, BW=77.6MiB/s (81.4MB/s)(155MiB/2001msec); 0 zone resets 00:09:44.279 slat (usec): min=3, max=137, avg= 5.46, stdev= 2.68 00:09:44.279 clat (usec): min=263, max=8854, avg=3215.49, stdev=1087.94 00:09:44.279 lat (usec): min=269, max=8868, avg=3220.95, stdev=1089.08 00:09:44.279 clat percentiles (usec): 00:09:44.279 | 1.00th=[ 1926], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2474], 00:09:44.279 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2769], 60.00th=[ 2933], 00:09:44.279 | 70.00th=[ 3294], 80.00th=[ 3982], 90.00th=[ 4948], 95.00th=[ 5669], 00:09:44.279 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7373], 99.95th=[ 7635], 00:09:44.279 | 99.99th=[ 8586] 00:09:44.279 bw ( KiB/s): min=73144, max=80008, per=97.72%, avg=77698.67, stdev=3944.59, samples=3 00:09:44.279 iops : min=18284, max=20002, avg=19424.67, stdev=987.87, samples=3 00:09:44.279 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:09:44.279 lat (msec) : 2=1.33%, 4=78.86%, 10=19.77% 00:09:44.279 cpu : usr=98.85%, sys=0.15%, ctx=27, majf=0, minf=626 00:09:44.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:44.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.279 issued rwts: total=39860,39774,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.279 00:09:44.279 Run status group 0 (all jobs): 00:09:44.279 READ: bw=77.8MiB/s (81.6MB/s), 77.8MiB/s-77.8MiB/s (81.6MB/s-81.6MB/s), io=156MiB (163MB), run=2001-2001msec 00:09:44.279 WRITE: bw=77.6MiB/s (81.4MB/s), 77.6MiB/s-77.6MiB/s (81.4MB/s-81.4MB/s), io=155MiB (163MB), run=2001-2001msec 00:09:44.279 ----------------------------------------------------- 00:09:44.279 Suppressions used: 00:09:44.279 count bytes template 00:09:44.279 1 32 /usr/src/fio/parse.c 00:09:44.279 1 8 libtcmalloc_minimal.so 00:09:44.279 ----------------------------------------------------- 00:09:44.279 00:09:44.279 05:11:36 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:44.279 05:11:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:44.279 05:11:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:44.279 05:11:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:44.279 05:11:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:44.279 05:11:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:44.279 05:11:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:44.279 05:11:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:44.279 05:11:37 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:44.279 05:11:37 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:44.279 05:11:37 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:44.279 05:11:37 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:44.279 05:11:37 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:44.279 05:11:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:44.279 05:11:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:44.279 05:11:37 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:44.279 05:11:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:44.279 05:11:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:44.279 05:11:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:44.279 05:11:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:44.279 05:11:37 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:44.279 05:11:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:44.279 05:11:37 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:44.279 05:11:37 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:44.279 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:44.279 fio-3.35 00:09:44.279 Starting 1 thread 00:09:49.567 00:09:49.567 test: (groupid=0, jobs=1): err= 0: pid=76545: Sun Nov 10 05:11:42 2024 00:09:49.567 read: IOPS=19.3k, BW=75.5MiB/s (79.2MB/s)(151MiB/2001msec) 00:09:49.567 slat (nsec): min=3365, max=61561, avg=5524.45, stdev=2930.29 00:09:49.567 clat (usec): min=547, max=11430, avg=3287.52, stdev=1247.95 00:09:49.567 lat (usec): min=557, max=11490, avg=3293.04, stdev=1249.40 00:09:49.567 clat percentiles (usec): 00:09:49.567 | 1.00th=[ 1860], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2442], 00:09:49.567 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2802], 60.00th=[ 2966], 00:09:49.567 | 70.00th=[ 3294], 80.00th=[ 4113], 90.00th=[ 5276], 95.00th=[ 5997], 00:09:49.567 | 99.00th=[ 7504], 99.50th=[ 7832], 99.90th=[ 8356], 99.95th=[ 8979], 00:09:49.567 | 99.99th=[11338] 00:09:49.567 bw ( KiB/s): min=71064, max=86064, per=100.00%, avg=78946.67, stdev=7529.23, samples=3 00:09:49.567 iops : min=17766, max=21516, avg=19736.67, stdev=1882.31, samples=3 00:09:49.567 write: IOPS=19.3k, BW=75.4MiB/s (79.0MB/s)(151MiB/2001msec); 0 zone resets 00:09:49.567 slat (nsec): min=3530, max=94795, avg=5750.23, stdev=2984.70 00:09:49.567 clat (usec): min=510, max=11336, avg=3318.97, stdev=1263.93 00:09:49.567 lat (usec): min=520, max=11352, avg=3324.72, stdev=1265.39 00:09:49.567 clat percentiles (usec): 00:09:49.567 | 1.00th=[ 1876], 5.00th=[ 2180], 10.00th=[ 
2278], 20.00th=[ 2442], 00:09:49.567 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2835], 60.00th=[ 2999], 00:09:49.567 | 70.00th=[ 3326], 80.00th=[ 4228], 90.00th=[ 5342], 95.00th=[ 6063], 00:09:49.567 | 99.00th=[ 7504], 99.50th=[ 7832], 99.90th=[ 8356], 99.95th=[ 9110], 00:09:49.567 | 99.99th=[11207] 00:09:49.567 bw ( KiB/s): min=71152, max=85720, per=100.00%, avg=79058.67, stdev=7363.41, samples=3 00:09:49.567 iops : min=17788, max=21430, avg=19764.67, stdev=1840.85, samples=3 00:09:49.567 lat (usec) : 750=0.01%, 1000=0.01% 00:09:49.567 lat (msec) : 2=2.01%, 4=76.43%, 10=21.50%, 20=0.04% 00:09:49.567 cpu : usr=99.00%, sys=0.05%, ctx=3, majf=0, minf=626 00:09:49.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:49.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.567 issued rwts: total=38681,38606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.567 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.567 00:09:49.567 Run status group 0 (all jobs): 00:09:49.567 READ: bw=75.5MiB/s (79.2MB/s), 75.5MiB/s-75.5MiB/s (79.2MB/s-79.2MB/s), io=151MiB (158MB), run=2001-2001msec 00:09:49.567 WRITE: bw=75.4MiB/s (79.0MB/s), 75.4MiB/s-75.4MiB/s (79.0MB/s-79.0MB/s), io=151MiB (158MB), run=2001-2001msec 00:09:49.567 ----------------------------------------------------- 00:09:49.567 Suppressions used: 00:09:49.567 count bytes template 00:09:49.567 1 32 /usr/src/fio/parse.c 00:09:49.567 1 8 libtcmalloc_minimal.so 00:09:49.567 ----------------------------------------------------- 00:09:49.567 00:09:49.567 ************************************ 00:09:49.567 END TEST nvme_fio 00:09:49.567 ************************************ 00:09:49.567 05:11:42 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:49.567 05:11:42 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:49.567 00:09:49.568 real 0m25.615s 00:09:49.568 user 0m15.988s 00:09:49.568 sys 0m17.253s 00:09:49.568 05:11:42 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.568 05:11:42 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:49.568 00:09:49.568 real 1m32.529s 00:09:49.568 user 3m30.151s 00:09:49.568 sys 0m27.308s 00:09:49.568 ************************************ 00:09:49.568 END TEST nvme 00:09:49.568 ************************************ 00:09:49.568 05:11:42 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.568 05:11:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:49.568 05:11:42 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:49.568 05:11:42 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:49.568 05:11:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:49.568 05:11:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.568 05:11:42 -- common/autotest_common.sh@10 -- # set +x 00:09:49.568 ************************************ 00:09:49.568 START TEST nvme_scc 00:09:49.568 ************************************ 00:09:49.568 05:11:42 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:49.568 * Looking for test storage... 
00:09:49.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:49.568 05:11:42 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:49.568 05:11:42 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:49.568 05:11:42 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:49.829 05:11:42 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:49.829 05:11:42 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.829 05:11:42 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:49.830 05:11:42 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.830 05:11:42 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:49.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.830 --rc genhtml_branch_coverage=1 00:09:49.830 --rc genhtml_function_coverage=1 00:09:49.830 --rc genhtml_legend=1 00:09:49.830 --rc geninfo_all_blocks=1 00:09:49.830 --rc geninfo_unexecuted_blocks=1 00:09:49.830 00:09:49.830 ' 00:09:49.830 05:11:42 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:49.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.830 --rc genhtml_branch_coverage=1 00:09:49.830 --rc genhtml_function_coverage=1 00:09:49.830 --rc genhtml_legend=1 00:09:49.830 --rc geninfo_all_blocks=1 00:09:49.830 --rc geninfo_unexecuted_blocks=1 00:09:49.830 00:09:49.830 ' 00:09:49.830 05:11:42 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:49.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.830 --rc genhtml_branch_coverage=1 00:09:49.830 --rc genhtml_function_coverage=1 00:09:49.830 --rc genhtml_legend=1 00:09:49.830 --rc geninfo_all_blocks=1 00:09:49.830 --rc geninfo_unexecuted_blocks=1 00:09:49.830 00:09:49.830 ' 00:09:49.830 05:11:42 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:49.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.830 --rc genhtml_branch_coverage=1 00:09:49.830 --rc genhtml_function_coverage=1 00:09:49.830 --rc genhtml_legend=1 00:09:49.830 --rc geninfo_all_blocks=1 00:09:49.830 --rc geninfo_unexecuted_blocks=1 00:09:49.830 00:09:49.830 ' 00:09:49.830 05:11:42 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:49.830 05:11:42 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:49.830 05:11:42 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:49.830 05:11:42 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:49.830 05:11:42 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.830 05:11:42 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.830 05:11:42 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.830 05:11:42 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.830 05:11:42 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.830 05:11:42 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:49.830 05:11:42 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
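The lcov probe above goes through scripts/common.sh's cmp_versions helper, which splits both version strings on '.', '-' and ':' and compares them field by field as integers, treating missing fields as zero. A condensed sketch of the same idea (simplified: the real helper also sanitizes each field through the decimal check seen in the trace before comparing):

    # Sketch: succeed when version $1 sorts strictly below version $2.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Assumes plain decimal fields; leading zeros or letters
            # would need extra validation.
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"   # same result as the lt 1.15 2 call above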
00:09:49.830 05:11:42 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:49.830 05:11:42 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:49.830 05:11:42 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:49.830 05:11:42 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:49.830 05:11:42 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:49.830 05:11:42 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:49.830 05:11:42 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:49.830 05:11:42 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:49.830 05:11:42 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:49.830 05:11:42 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:49.830 05:11:42 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:49.830 05:11:42 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:49.830 05:11:42 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:49.830 05:11:42 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:50.092 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:50.353 Waiting for block devices as requested 00:09:50.353 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.353 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.353 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.615 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.923 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:55.923 05:11:48 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:55.923 05:11:48 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:55.923 05:11:48 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:55.923 05:11:48 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:55.923 05:11:48 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:55.923 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.924 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:55.925 05:11:48 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.925 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:55.926 05:11:48 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:55.926 05:11:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.927 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:55.928 05:11:48 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:55.928 05:11:48 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:55.928 05:11:48 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:55.928 05:11:48 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:55.929 05:11:48 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:55.929 05:11:48 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:55.929 05:11:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:55.929 
05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.929 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
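
The trace above is bash xtrace output from the nvme_get helper in nvme/functions.sh: it runs nvme-cli's id-ctrl (or id-ns) against a device, walks the emitted "reg : val" lines with IFS=: and read -r, and caches every non-empty value in a global associative array (nvme1[vid]=0x1b36, nvme1[oacs]=0x12a, and so on). A minimal sketch of that loop, assuming the nvme-cli output format shown in the trace; the helper name and the calls mirror functions.sh@16-23, but the body is an illustrative reconstruction rather than the verbatim source:

    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                    # declare the global array, e.g. nvme1=()
        while IFS=: read -r reg val; do        # split each output line at the first ':'
            reg=${reg//[[:space:]]/}           # "lbaf  0 " -> "lbaf0"
            val=${val# }                       # drop the padding after the colon
            [[ -n $val ]] || continue          # keys without a value are skipped
            eval "${ref}[\$reg]=\$val"         # e.g. nvme1[oacs]=0x12a
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }

Because only the last variable passed to read keeps the remainder of the line, values that themselves contain colons (the lbafN descriptors) survive intact, e.g. nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ', while nvme-cli's multi-line power-state output is why a separate rwt key shows up in the trace alongside ps0.
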
00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:55.930 05:11:48 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.930 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:55.931 05:11:48 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:55.931 05:11:48 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:55.931 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
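
At functions.sh@47-63 (visible where each controller's dump begins and again once it completes) the surrounding loop ties these per-register arrays together: every /sys/class/nvme/nvmeX controller is filtered through pci_can_use, dumped with nvme_get, its namespaces parsed the same way, and the results indexed into the ctrls/nvmes/bdfs/ordered_ctrls tables. A hedged sketch of that bookkeeping, assuming the sysfs layout the trace shows; how $pci is derived is an assumption here, since the trace only shows the resulting address (0000:00:10.0, 0000:00:11.0, 0000:00:12.0):

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                              # e.g. nvme1
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumption: e.g. 0000:00:10.0
        pci_can_use "$pci" || continue                    # honor PCI allow/block lists
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills nvme1[...]
        declare -gA "${ctrl_dev}_ns=()"
        declare -n _ctrl_ns=${ctrl_dev}_ns
        for ns in "$ctrl/${ctrl_dev}n"*; do               # e.g. .../nvme1/nvme1n1
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"       # fills nvme1n1[...]
            _ctrl_ns[${ns_dev##*n}]=$ns_dev               # namespace number -> device
        done
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                   # name of the per-ctrl ns table
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # index by controller number
        unset -n _ctrl_ns
    done

Storing the name of the per-controller namespace table in nvmes[] (rather than its contents) lets later helpers attach to it by reference, exactly as the trace's "local -n _ctrl_ns=nvme1_ns" line does.
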
00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:55.932 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:55.933 
05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:55.933 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:55.934 05:11:48 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:55.934 05:11:48 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:55.934 05:11:48 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:55.934 05:11:48 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:55.934 05:11:48 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:55.934 05:11:48 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.934 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:55.935 05:11:48 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
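A note on the pattern repeating throughout this trace: the IFS=: / read -r reg val / eval triplets are nvme/functions.sh splitting each "field : value" line that nvme-cli prints and storing it in a global associative array (nvme2 here). A minimal sketch of that loop, assuming the id-ctrl output keeps this layout; parse_id_output is a hypothetical stand-in for the real nvme_get:

  # Hypothetical re-creation of the parsing loop traced above (a sketch, not
  # the actual nvme/functions.sh implementation).
  parse_id_output() {
      local ref=$1 dev=$2 reg val
      declare -gA "$ref=()"                     # global assoc array, e.g. nvme2
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}              # strip padding around the key
          [[ -n $reg && -n $val ]] || continue  # keep only populated fields
          eval "${ref}[\$reg]=\$val"            # e.g. nvme2[vid]=' 0x1b36'
      done < <(nvme id-ctrl "$dev")
  }
  # usage: parse_id_output nvme2 /dev/nvme2 && echo "${nvme2[sn]}"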
00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:55.935 05:11:48 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.935 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
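For spot-checking a single field like the ones streaming past here (hmpre, tnvmcap, sanicap, ...), a one-off filter over the same nvme-cli text output can stand in for the full array build. A sketch, assuming the "field : value" layout seen in this trace; get_ctrl_field is a hypothetical helper, not part of nvme/functions.sh:

  # Extract one id-ctrl field by name (sketch; values that themselves contain
  # a ':' such as subnqn would be truncated by the -F: split).
  get_ctrl_field() {
      nvme id-ctrl "$1" |
          awk -F: -v f="$2" '$1 ~ "^"f"[[:space:]]*$" { sub(/^[[:space:]]+/, "", $2); print $2; exit }'
  }
  # e.g. get_ctrl_field /dev/nvme2 oacs   -> 0x12a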
00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
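The sqes=0x66 and cqes=0x44 bytes just recorded encode submission/completion queue entry sizes as two nibbles: per the NVMe spec the low nibble is the required size and the high nibble the maximum, both as log2 of the byte count. A quick arithmetic check in the shell:

  # Decode the queue-entry-size bytes captured above.
  sqes=0x66 cqes=0x44
  printf 'SQE: min %d B, max %d B\n' $((2 ** (sqes & 0xf))) $((2 ** (sqes >> 4)))
  printf 'CQE: min %d B, max %d B\n' $((2 ** (cqes & 0xf))) $((2 ** (cqes >> 4)))
  # -> SQE: min 64 B, max 64 B / CQE: min 16 B, max 16 B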
00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.936 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:55.937 
05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
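Back-of-envelope size check for the namespace being parsed: nsze counts logical blocks, and flbas bits 3:0 select the active LBA format (format 4 here, whose lbads:12 in the lbaf table further down means 2^12 = 4096-byte blocks):

  # Namespace capacity from the fields recorded above.
  nsze=0x100000 flbas=0x4 lbads=12
  printf '%d blocks of %d B = %d GiB\n' $((nsze)) $((1 << lbads)) $(( nsze * (1 << lbads) >> 30 ))
  # -> 1048576 blocks of 4096 B = 4 GiB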
00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:55.937 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:55.938 05:11:48 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:55.938 05:11:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:55.939 05:11:48 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:55.939 05:11:48 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.939 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
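The mssrl value stored here and the mcl/msrc fields just below bound the NVMe Copy command for this namespace: maximum single source range length, maximum copy length, and maximum source range count. msrc is a 0-based value per the spec, so 127 means 128 ranges; a small sanity check:

  # Interpret the copy-offload limits being parsed (msrc is 0-based).
  mssrl=128 mcl=128 msrc=127
  echo "Copy: up to $((msrc + 1)) ranges of <= $mssrl blocks, <= $mcl blocks total"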
00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
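Note: the lbafN descriptors parsed next encode the namespace's supported LBA formats — ms is the metadata bytes per block, lbads the data size as a power of two, rp the relative performance — and lbaf4 (ms:0 lbads:12, i.e. 4096-byte blocks with no metadata) is the one marked in use. A hedged one-liner for turning such a descriptor string into a byte count (illustrative parsing of the nvme-cli output format, not SPDK code):

    lbaf='ms:0 lbads:12 rp:0 (in use)'
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<<"$lbaf")
    echo "data block size: $((1 << lbads)) bytes"   # 1 << 12 = 4096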
00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 
05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:55.940 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 
05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:55.941 05:11:49 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.941 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:55.942 
05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:55.942 05:11:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:55.942 05:11:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:55.942 05:11:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:55.942 05:11:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:55.942 05:11:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:55.942 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
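Note: the ver field captured just above, 0x10400, packs the NVMe spec version as major (bits 31:16), minor (bits 15:8) and tertiary (bits 7:0), so this controller reports NVMe 1.4.0. A quick decode with the value from this trace:

    # Decode id-ctrl 'ver' 0x10400 -> NVMe 1.4.0
    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))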
00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
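Note: between devices, the outer loop (functions.sh@47-63 in this trace) books each accepted controller into three maps — ctrls (the device name), nvmes (its namespace array) and bdfs (the PCI address that pci_can_use approved, e.g. 0000:00:12.0 for nvme2). A hypothetical standalone rendering of that sysfs walk (the SPDK helpers derive the BDF through their own scripts; this only sketches the idea):

    declare -A bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                              # glob may match nothing
        dev=${ctrl##*/}                                         # e.g. nvme3
        bdfs[$dev]=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:13.0
        echo "$dev -> ${bdfs[$dev]}"
    done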
00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.943 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 
05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
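Note: the sqes and cqes values read a few entries below (0x66 and 0x44) pack log2 queue-entry sizes, required in the low nibble and maximum in the high nibble — i.e. 64-byte submission and 16-byte completion entries here. An illustrative decode using the values from this trace:

    sqes=0x66; cqes=0x44
    echo "SQ entry: $((1 << (sqes & 0xf)))..$((1 << (sqes >> 4))) bytes"   # 64..64
    echo "CQ entry: $((1 << (cqes & 0xf)))..$((1 << (cqes >> 4))) bytes"   # 16..16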
00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.944 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
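Note: shortly below, nvme_scc.sh asks get_ctrl_with_feature for a controller supporting SCC; ctrl_has_scc fetches the controller's ONCS word (0x15d on every controller in this trace) and tests bit 8, the Simple Copy support bit, via (( oncs & 1 << 8 )). The test in isolation, with the value hard-coded from this trace:

    oncs=0x15d
    if (( oncs & 1 << 8 )); then        # 0x15d & 0x100 != 0
        echo "controller supports Simple Copy (SCC)"
    fi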
00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:55.945 05:11:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:55.945 05:11:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:55.946 05:11:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:55.946 
05:11:49 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:55.946 05:11:49 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:55.946 05:11:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:55.946 05:11:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:55.946 05:11:49 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:56.518 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:57.091 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.091 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.091 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.091 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:57.091 05:11:50 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:57.091 05:11:50 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:57.091 05:11:50 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.091 05:11:50 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:57.091 ************************************ 00:09:57.091 START TEST nvme_simple_copy 00:09:57.091 ************************************ 00:09:57.091 05:11:50 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:57.366 Initializing NVMe Controllers 00:09:57.366 Attaching to 0000:00:10.0 00:09:57.366 Controller supports SCC. Attached to 0000:00:10.0 00:09:57.366 Namespace ID: 1 size: 6GB 00:09:57.366 Initialization complete. 00:09:57.366 00:09:57.366 Controller QEMU NVMe Ctrl (12340 ) 00:09:57.366 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:57.366 Namespace Block Size:4096 00:09:57.366 Writing LBAs 0 to 63 with Random Data 00:09:57.366 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:57.366 LBAs matching Written Data: 64 00:09:57.366 00:09:57.366 ************************************ 00:09:57.366 END TEST nvme_simple_copy 00:09:57.366 ************************************ 00:09:57.366 real 0m0.271s 00:09:57.366 user 0m0.100s 00:09:57.366 sys 0m0.068s 00:09:57.366 05:11:50 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.366 05:11:50 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:57.626 ************************************ 00:09:57.626 END TEST nvme_scc 00:09:57.626 ************************************ 00:09:57.626 00:09:57.626 real 0m7.946s 00:09:57.626 user 0m1.095s 00:09:57.626 sys 0m1.501s 00:09:57.626 05:11:50 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.626 05:11:50 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:57.626 05:11:50 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:57.626 05:11:50 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:57.626 05:11:50 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:57.626 05:11:50 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:57.626 05:11:50 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:57.626 05:11:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:57.626 05:11:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.626 05:11:50 -- common/autotest_common.sh@10 -- # set +x 00:09:57.626 ************************************ 00:09:57.626 START TEST nvme_fdp 00:09:57.626 ************************************ 00:09:57.626 05:11:50 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:09:57.626 * Looking for test storage... 
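The run above picks the controller for the simple-copy test by feature: get_ctrls_with_feature scc walks every registered controller, reads its ONCS (Optional NVM Command Support) word, and echoes the controller when bit 8, the Copy command bit, is set; the first controller echoed, nvme1, is used, the test attaches to 0000:00:10.0, and the device reports "Controller supports SCC." A minimal bash sketch of that bit test, using the 0x15d value every QEMU controller reports in this run; the has_simple_copy helper name is made up for illustration:

#!/usr/bin/env bash
# Sketch of the check traced at nvme/functions.sh@188: ONCS bit 8
# advertises the NVMe Copy command, which the SCC tests need.

has_simple_copy() {
    local oncs=$1
    # bash arithmetic follows C precedence: oncs & (1 << 8); 1 << 8 == 0x100
    (( oncs & 1 << 8 ))
}

oncs=0x15d    # value echoed for nvme0..nvme3 in the trace above
if has_simple_copy "$oncs"; then
    echo "controller supports Simple Copy"
fi

With 0x15d the AND yields 0x100, the arithmetic expression is non-zero, and the function returns success, which is why all four controllers pass the filter.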
00:09:57.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:57.626 05:11:50 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:57.626 05:11:50 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:57.626 05:11:50 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:57.626 05:11:50 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.626 05:11:50 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:57.626 05:11:50 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.626 05:11:50 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:57.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.627 --rc genhtml_branch_coverage=1 00:09:57.627 --rc genhtml_function_coverage=1 00:09:57.627 --rc genhtml_legend=1 00:09:57.627 --rc geninfo_all_blocks=1 00:09:57.627 --rc geninfo_unexecuted_blocks=1 00:09:57.627 00:09:57.627 ' 00:09:57.627 05:11:50 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:57.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.627 --rc genhtml_branch_coverage=1 00:09:57.627 --rc genhtml_function_coverage=1 00:09:57.627 --rc genhtml_legend=1 00:09:57.627 --rc geninfo_all_blocks=1 00:09:57.627 --rc geninfo_unexecuted_blocks=1 00:09:57.627 00:09:57.627 ' 00:09:57.627 05:11:50 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:57.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.627 --rc genhtml_branch_coverage=1 00:09:57.627 --rc genhtml_function_coverage=1 00:09:57.627 --rc genhtml_legend=1 00:09:57.627 --rc geninfo_all_blocks=1 00:09:57.627 --rc geninfo_unexecuted_blocks=1 00:09:57.627 00:09:57.627 ' 00:09:57.627 05:11:50 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:57.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.627 --rc genhtml_branch_coverage=1 00:09:57.627 --rc genhtml_function_coverage=1 00:09:57.627 --rc genhtml_legend=1 00:09:57.627 --rc geninfo_all_blocks=1 00:09:57.627 --rc geninfo_unexecuted_blocks=1 00:09:57.627 00:09:57.627 ' 00:09:57.627 05:11:50 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:57.887 05:11:50 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:57.887 05:11:50 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:57.887 05:11:50 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:57.887 05:11:50 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.887 05:11:50 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.887 05:11:50 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.887 05:11:50 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.887 05:11:50 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.887 05:11:50 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.887 05:11:50 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.887 05:11:50 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.887 05:11:50 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:57.887 05:11:50 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
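A few records up, scripts/common.sh gates the LCOV_OPTS export on the installed lcov version: lt 1.15 2 calls cmp_versions 1.15 '<' 2, which splits both version strings on IFS=.-: into arrays, iterates to the longer length (the ternary in the trace), and compares field by field until one side wins; here 1 < 2 on the first field, so it returns 0 and the branch-coverage flags are exported. A compact sketch of the same split-and-compare approach (not a verbatim copy of scripts/common.sh, which also sanitizes each field through a decimal helper):

#!/usr/bin/env bash
# Sketch of cmp_versions/lt (scripts/common.sh@333-373) as traced above:
# split each version on ".-:" and compare the fields numerically.

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # missing fields compare as 0, so "2" behaves like "2.0"
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == '==' ]]    # every field matched
}

lt 1.15 2 && echo "lcov < 2: exporting branch-coverage LCOV_OPTS"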
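The records that follow come from scan_nvme_ctrls filling the tables declared at nvme/functions.sh@10-13: for each /sys/class/nvme/nvmeX that pci_can_use admits, nvme_get runs nvme id-ctrl (and id-ns per namespace), then reads the "register : value" lines with IFS=: and evals each pair into a global associative array, the [[ -n ... ]] / eval / assignment drumbeat that fills the rest of this log. A condensed, self-contained sketch of that parse loop for one controller (the real helper takes the array name as a parameter and handles namespaces too; error handling omitted):

#!/usr/bin/env bash
# Condensed sketch of nvme_get (nvme/functions.sh@16-23) as traced below:
# turn "reg : val" lines from 'nvme id-ctrl' into an associative array.

declare -A nvme0=()

while IFS=: read -r reg val; do
    [[ -n $val ]] || continue        # records without a value are skipped
    reg=${reg//[[:space:]]/}         # keys arrive space-padded, e.g. "oncs  "
    val=${val# }                     # drop the blank after the colon
    eval "nvme0[$reg]=\"$val\""      # e.g. nvme0[oncs]="0x15d"
done < <(nvme id-ctrl /dev/nvme0)    # the trace uses /usr/local/src/nvme-cli/nvme

echo "subnqn: ${nvme0[subnqn]}"

The eval indirection is what lets one loop fill nvme0, nvme1, nvme0n1, and so on by name, and it is why every field in the trace appears twice: once as the eval string and once as the resulting assignment.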
00:09:57.887 05:11:50 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:57.887 05:11:50 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:57.887 05:11:50 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:57.887 05:11:50 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:57.887 05:11:50 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:57.887 05:11:50 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:57.887 05:11:50 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:57.887 05:11:50 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:57.887 05:11:50 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:57.887 05:11:50 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.887 05:11:50 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:58.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:58.148 Waiting for block devices as requested 00:09:58.148 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:58.408 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:58.408 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:58.667 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:03.949 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:03.949 05:11:56 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:03.949 05:11:56 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:03.949 05:11:56 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:03.949 05:11:56 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:03.949 05:11:56 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:03.949 05:11:56 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:03.949 05:11:56 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:03.949 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:03.950 05:11:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:03.950 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:03.951 05:11:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:03.951 05:11:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:03.951 
05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.951 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:03.952 05:11:56 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.952 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:03.953 05:11:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:03.953 05:11:56 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:03.953 05:11:56 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:03.953 05:11:56 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:03.953 05:11:56 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.953 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.954 
05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.954 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 
05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:03.955 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:03.956 05:11:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.956 05:11:56 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:03.956 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:03.957 05:11:56 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.957 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:03.958 05:11:56 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:03.958 05:11:56 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:03.958 05:11:56 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:03.958 05:11:56 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:03.958 
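At this point the trace has finished cataloguing nvme1's namespace, recorded the controller in the ctrls/nvmes/bdfs/ordered_ctrls maps, cleared nvme2's PCI address (0000:00:12.0) through pci_can_use, and is entering nvme_get for nvme2. Pieced together from the functions.sh@16-23 references visible in the trace itself, the parsing helper behaves approximately like the sketch below; this is a reconstruction, not the verbatim SPDK source, and NVME stands in for the CI's /usr/local/src/nvme-cli/nvme binary.

    # Reconstruction of nvme_get from the functions.sh@16-23 trace lines above
    # (a sketch; leading-space trimming is simplified).
    NVME=${NVME:-nvme}
    nvme_get() {
        local ref=$1 reg val        # @17: name of the target array, loop variables
        shift                       # @18: the rest is the nvme-cli subcommand
        local -gA "$ref=()"         # @20: declare a global associative array, e.g. nvme2=()

        # @16/@21: run e.g. `nvme id-ctrl /dev/nvme2` and split every output line
        # on the first ':' into a register name and its value.
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue                  # @22: skip lines with no value
            eval "${ref}[${reg// /}]=\"${val# }\""     # @23: nvme2[vid]="0x1b36", ...
        done < <("$NVME" "$@")
    }

Each eval pair in the log (an eval of, say, 'nvme2[vid]="0x1b36"' followed by the resulting assignment) is one iteration of this loop.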
05:11:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:03.958 05:11:56 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.958 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
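The wctemp and cctemp values captured a few entries up (343 and 373 for this QEMU controller) are composite-temperature thresholds, which id-ctrl reports in kelvins per the NVMe specification. A hypothetical conversion for reading them:

    # id-ctrl temperature thresholds are reported in kelvins; converting the
    # values captured above (wctemp=343, cctemp=373). Hypothetical helper,
    # using the integer 273 K offset.
    kelvin_to_celsius() { echo $(( $1 - 273 )); }
    kelvin_to_celsius "${nvme2[wctemp]}"   # 343 K -> 70 C  (warning threshold)
    kelvin_to_celsius "${nvme2[cctemp]}"   # 373 K -> 100 C (critical threshold)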
00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.959 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:03.960 05:11:56 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:03.960 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:03.961 05:11:56 nvme_fdp -- 
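With nvme2's id-ctrl fields stored, the trace moves on to the controller's namespaces: functions.sh@53 binds a nameref to nvme2_ns, @54-57 glob /sys/class/nvme/nvme2/nvme2n* and feed each entry through nvme_get with id-ns, and @58 indexes the result by namespace number. A sketch of that walk, reconstructed from those line references as a fragment from inside the harness's controller loop (not verbatim source):

    # Reconstruction of the namespace walk from the functions.sh@53-58 trace lines.
    local -n _ctrl_ns=${ctrl_dev}_ns              # @53: nameref to e.g. nvme2_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do           # @54: /sys/class/nvme/nvme2/nvme2n1, n2, ...
        [[ -e $ns ]] || continue                  # @55: guard against an unmatched glob
        ns_dev=${ns##*/}                          # @56: nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: parse id-ns into nvme2n1[...]
        _ctrl_ns[${ns##*n}]=$ns_dev               # @58: key by namespace number (1, 2, ...)
    done

The nsze/ncap/nuse values that follow (0x100000 blocks each) are the namespace size, capacity, and utilization reported for nvme2n1.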
nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.961 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- 
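Each lbafN record parsed just above describes one LBA format: ms is the per-block metadata size in bytes, lbads is the base-2 logarithm of the data block size, and rp is the relative performance hint; the "(in use)" suffix marks the format selected by flbas (0x4 for nvme2n1, i.e. lbaf4 with lbads:12). A hypothetical decoder for the block size:

    # Derive the data block size from an lbafN record such as
    # "ms:0 lbads:12 rp:0 (in use)" -- lbads is log2 of the block size.
    lbaf_block_size() {
        local lbads
        [[ $1 =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
        echo $(( 1 << lbads ))
    }
    lbaf_block_size "${nvme2n1[lbaf4]}"   # lbads:12 -> 4096-byte blocks (in use)
    lbaf_block_size "${nvme2n1[lbaf0]}"   # lbads:9  -> 512-byte blocks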
nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:03.962 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:03.963 05:11:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.963 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.964 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
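The block above is the heart of the harness's nvme_get helper: the output of nvme id-ns is read back one line at a time with IFS set to ':', each line splits into a register name and a value, and every non-empty value is committed into a global associative array named after the device (nvme2n3 here) via eval. A minimal standalone sketch of that read-and-store pattern follows, assuming nvme-cli is installed; my_nvme_get is an invented name, not the real functions.sh implementation, and it skips the extra munging the real helper does for composite fields such as "lbaf  N".

#!/usr/bin/env bash
# Minimal sketch of the read-and-store pattern in the trace: split each
# "reg : val" line of nvme id-ns output on ':' and keep non-empty values
# in a global associative array. my_nvme_get is an invented name.
my_nvme_get() {
    local ref=$1 dev=$2 reg val
    declare -gA "$ref=()"                    # global array named after the device
    local -n map=$ref                        # nameref in place of the trace's eval
    while IFS=: read -r reg val; do
        reg=${reg%%[[:space:]]*}             # field name is the first token
        val=${val#"${val%%[![:space:]]*}"}   # strip leading blanks from the value
        [[ -n $val ]] && map[$reg]=$val      # same [[ -n ... ]] guard as the trace
    done < <(nvme id-ns "$dev")              # needs nvme-cli and an NVMe namespace
}

my_nvme_get nvme2n3 /dev/nvme2n3
echo "nsze=${nvme2n3[nsze]:-unset} flbas=${nvme2n3[flbas]:-unset}"

The nameref sidesteps the quoting pitfalls of eval while producing the same kind of global array the trace shows being populated field by field.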
00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:03.965 
05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.965 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:03.966 05:11:56 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:03.966 05:11:56 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:03.966 05:11:56 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:03.966 05:11:56 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:03.966 05:11:56 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:03.966 05:11:56 nvme_fdp -- 
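With the three namespace maps built, the trace files nvme2 into the harness bookkeeping: ctrls, nvmes and bdfs are keyed by the controller name, ordered_ctrls by its numeric index, and enumeration only proceeds to nvme3 after pci_can_use accepts 0000:00:13.0 (the empty left-hand side of the [[ =~ 0000:00:13.0 ]] test suggests no allow-list is configured, so every address passes). A rough sketch of that enumerate, filter, register shape; MY_PCI_ALLOWED and MY_PCI_BLOCKED are illustrative stand-ins, not the configuration the real pci_can_use consults.

#!/usr/bin/env bash
# Sketch of the enumerate -> filter -> register shape shown in the trace.
# MY_PCI_ALLOWED / MY_PCI_BLOCKED are stand-in knobs for illustration.
declare -A ctrls=() bdfs=()

my_pci_can_use() {
    local bdf=$1
    # with an allow-list set, only listed addresses pass (cf. the =~ test)
    if [[ -n ${MY_PCI_ALLOWED-} ]] && [[ " $MY_PCI_ALLOWED " != *" $bdf "* ]]; then
        return 1
    fi
    [[ " ${MY_PCI_BLOCKED-} " == *" $bdf "* ]] && return 1
    return 0
}

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                       # glob may match nothing
    bdf=$(basename "$(readlink -f "$ctrl/device")")  # controller's PCI address
    my_pci_can_use "$bdf" || continue
    dev=${ctrl##*/}                                  # e.g. nvme3
    ctrls[$dev]=$dev                                 # register, as the trace does
    bdfs[$dev]=$bdf
done

for dev in "${!ctrls[@]}"; do echo "$dev -> ${bdfs[$dev]}"; done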
nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.966 
05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:56 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.966 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:03.966 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:03.966 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.966 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.966 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.966 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:03.966 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:03.966 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 
05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:03.967 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.968 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.969 05:11:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:03.969 05:11:57 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:03.969 05:11:57 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:03.969 05:11:57 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:03.969 05:11:57 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:03.969 05:11:57 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:04.542 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:05.112 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.112 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.112 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.112 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.112 05:11:58 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:05.112 05:11:58 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:05.112 05:11:58 
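The identify parsing traced above is one pattern repeated per field: functions.sh@21-23 splits each "name : value" line on the colon and stores it in an associative array keyed by register name. A minimal sketch of that loop, assuming the pairs come from the controller's identify output (the actual feed is not visible in this trace):

    declare -A nvme3
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue   # the trace's `[[ -n $val ]]` guard
        nvme3[$reg]=$val            # functions.sh uses eval because the array name is dynamic
    done < <(nvme id-ctrl /dev/nvme3)   # assumed source of the reg/val pairs

The controller selection that follows then reduces to a single bit test: ctrl_has_fdp reads each controller's CTRATT value and checks bit 19, the FDP attribute. Only nvme3 (ctratt=0x88010) has it set, which is why it is the controller handed to the FDP test at 0000:00:13.0. As seen at functions.sh@176-180:

    ctrl_has_fdp() {
        local ctrl=$1 ctratt
        ctratt=$(get_ctratt "$ctrl")   # 0x8000 for nvme0/1/2, 0x88010 for nvme3
        (( ctratt & 1 << 19 ))         # bit 19 (0x80000) advertises FDP support
    }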
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.112 05:11:58 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:05.112 ************************************ 00:10:05.112 START TEST nvme_flexible_data_placement 00:10:05.112 ************************************ 00:10:05.112 05:11:58 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:05.374 Initializing NVMe Controllers 00:10:05.374 Attaching to 0000:00:13.0 00:10:05.374 Controller supports FDP Attached to 0000:00:13.0 00:10:05.374 Namespace ID: 1 Endurance Group ID: 1 00:10:05.374 Initialization complete. 00:10:05.374 00:10:05.374 ================================== 00:10:05.374 == FDP tests for Namespace: #01 == 00:10:05.374 ================================== 00:10:05.374 00:10:05.374 Get Feature: FDP: 00:10:05.374 ================= 00:10:05.374 Enabled: Yes 00:10:05.374 FDP configuration Index: 0 00:10:05.374 00:10:05.374 FDP configurations log page 00:10:05.374 =========================== 00:10:05.374 Number of FDP configurations: 1 00:10:05.374 Version: 0 00:10:05.374 Size: 112 00:10:05.374 FDP Configuration Descriptor: 0 00:10:05.374 Descriptor Size: 96 00:10:05.374 Reclaim Group Identifier format: 2 00:10:05.374 FDP Volatile Write Cache: Not Present 00:10:05.374 FDP Configuration: Valid 00:10:05.374 Vendor Specific Size: 0 00:10:05.374 Number of Reclaim Groups: 2 00:10:05.374 Number of Reclaim Unit Handles: 8 00:10:05.374 Max Placement Identifiers: 128 00:10:05.374 Number of Namespaces Supported: 256 00:10:05.374 Reclaim Unit Nominal Size: 6000000 bytes 00:10:05.374 Estimated Reclaim Unit Time Limit: Not Reported 00:10:05.374 RUH Desc #000: RUH Type: Initially Isolated 00:10:05.374 RUH Desc #001: RUH Type: Initially Isolated 00:10:05.374 RUH Desc #002: RUH Type: Initially Isolated 00:10:05.374 RUH Desc #003: RUH Type: Initially Isolated 00:10:05.374 RUH Desc #004: RUH Type: Initially Isolated 00:10:05.374 RUH Desc #005: RUH Type: Initially Isolated 00:10:05.374 RUH Desc #006: RUH Type: Initially Isolated 00:10:05.374 RUH Desc #007: RUH Type: Initially Isolated 00:10:05.374 00:10:05.374 FDP reclaim unit handle usage log page 00:10:05.374 ====================================== 00:10:05.374 Number of Reclaim Unit Handles: 8 00:10:05.374 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:05.374 RUH Usage Desc #001: RUH Attributes: Unused 00:10:05.374 RUH Usage Desc #002: RUH Attributes: Unused 00:10:05.374 RUH Usage Desc #003: RUH Attributes: Unused 00:10:05.374 RUH Usage Desc #004: RUH Attributes: Unused 00:10:05.374 RUH Usage Desc #005: RUH Attributes: Unused 00:10:05.374 RUH Usage Desc #006: RUH Attributes: Unused 00:10:05.374 RUH Usage Desc #007: RUH Attributes: Unused 00:10:05.374 00:10:05.374 FDP statistics log page 00:10:05.374 ======================= 00:10:05.374 Host bytes with metadata written: 2176684032 00:10:05.374 Media bytes with metadata written: 2179416064 00:10:05.374 Media bytes erased: 0 00:10:05.374 00:10:05.374 FDP Reclaim unit handle status 00:10:05.374 ============================== 00:10:05.374 Number of RUHS descriptors: 2 00:10:05.374 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002427 00:10:05.374 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:05.374 00:10:05.374 FDP write on placement id: 0 success 00:10:05.374 00:10:05.374 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:10:05.374 00:10:05.374 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:05.374 00:10:05.374 Get Feature: FDP Events for Placement handle: #0 00:10:05.374 ======================== 00:10:05.374 Number of FDP Events: 6 00:10:05.374 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:05.374 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:05.374 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:05.374 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:05.374 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:05.374 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:05.374 00:10:05.374 FDP events log page 00:10:05.374 =================== 00:10:05.374 Number of FDP events: 1 00:10:05.374 FDP Event #0: 00:10:05.374 Event Type: RU Not Written to Capacity 00:10:05.374 Placement Identifier: Valid 00:10:05.374 NSID: Valid 00:10:05.374 Location: Valid 00:10:05.374 Placement Identifier: 0 00:10:05.374 Event Timestamp: 4 00:10:05.374 Namespace Identifier: 1 00:10:05.374 Reclaim Group Identifier: 0 00:10:05.374 Reclaim Unit Handle Identifier: 0 00:10:05.374 00:10:05.374 FDP test passed 00:10:05.374 00:10:05.374 real 0m0.228s 00:10:05.374 user 0m0.061s 00:10:05.374 sys 0m0.065s 00:10:05.374 05:11:58 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.374 ************************************ 00:10:05.374 05:11:58 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:05.374 END TEST nvme_flexible_data_placement 00:10:05.374 ************************************ 00:10:05.374 00:10:05.374 real 0m7.815s 00:10:05.374 user 0m1.055s 00:10:05.374 sys 0m1.478s 00:10:05.374 05:11:58 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.374 ************************************ 00:10:05.374 END TEST nvme_fdp 00:10:05.374 ************************************ 00:10:05.374 05:11:58 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:05.374 05:11:58 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:05.374 05:11:58 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:05.374 05:11:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:05.374 05:11:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.374 05:11:58 -- common/autotest_common.sh@10 -- # set +x 00:10:05.374 ************************************ 00:10:05.374 START TEST nvme_rpc 00:10:05.374 ************************************ 00:10:05.374 05:11:58 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:05.665 * Looking for test storage... 
00:10:05.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.665 05:11:58 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:05.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.665 --rc genhtml_branch_coverage=1 00:10:05.665 --rc genhtml_function_coverage=1 00:10:05.665 --rc genhtml_legend=1 00:10:05.665 --rc geninfo_all_blocks=1 00:10:05.665 --rc geninfo_unexecuted_blocks=1 00:10:05.665 00:10:05.665 ' 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:05.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.665 --rc genhtml_branch_coverage=1 00:10:05.665 --rc genhtml_function_coverage=1 00:10:05.665 --rc genhtml_legend=1 00:10:05.665 --rc geninfo_all_blocks=1 00:10:05.665 --rc geninfo_unexecuted_blocks=1 00:10:05.665 00:10:05.665 ' 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:05.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.665 --rc genhtml_branch_coverage=1 00:10:05.665 --rc genhtml_function_coverage=1 00:10:05.665 --rc genhtml_legend=1 00:10:05.665 --rc geninfo_all_blocks=1 00:10:05.665 --rc geninfo_unexecuted_blocks=1 00:10:05.665 00:10:05.665 ' 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:05.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.665 --rc genhtml_branch_coverage=1 00:10:05.665 --rc genhtml_function_coverage=1 00:10:05.665 --rc genhtml_legend=1 00:10:05.665 --rc geninfo_all_blocks=1 00:10:05.665 --rc geninfo_unexecuted_blocks=1 00:10:05.665 00:10:05.665 ' 00:10:05.665 05:11:58 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:05.665 05:11:58 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:05.665 05:11:58 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:05.665 05:11:58 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77912 00:10:05.665 05:11:58 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:05.665 05:11:58 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77912 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 77912 ']' 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.665 05:11:58 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.666 05:11:58 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:05.666 [2024-11-10 05:11:58.866763] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
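Before the RPC calls start, get_first_nvme_bdf settles on a target controller: it enumerates every NVMe device through gen_nvme.sh, keeps the list of PCI addresses, and returns the first one. Condensed from the trace above (four controllers found, 0000:00:10.0 selected):

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1   # trace: (( 4 == 0 )) is false, so we proceed
    bdf=${bdfs[0]}                    # -> 0000:00:10.0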
00:10:05.666 [2024-11-10 05:11:58.867415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77912 ] 00:10:05.926 [2024-11-10 05:11:59.017515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:05.926 [2024-11-10 05:11:59.066961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.926 [2024-11-10 05:11:59.067041] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.508 05:11:59 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:06.508 05:11:59 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:06.508 05:11:59 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:06.770 Nvme0n1 00:10:06.770 05:11:59 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:06.770 05:11:59 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:07.032 request: 00:10:07.032 { 00:10:07.032 "bdev_name": "Nvme0n1", 00:10:07.032 "filename": "non_existing_file", 00:10:07.032 "method": "bdev_nvme_apply_firmware", 00:10:07.032 "req_id": 1 00:10:07.032 } 00:10:07.032 Got JSON-RPC error response 00:10:07.032 response: 00:10:07.032 { 00:10:07.032 "code": -32603, 00:10:07.032 "message": "open file failed." 00:10:07.032 } 00:10:07.032 05:12:00 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:07.032 05:12:00 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:07.032 05:12:00 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:07.291 05:12:00 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:07.291 05:12:00 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77912 00:10:07.291 05:12:00 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 77912 ']' 00:10:07.291 05:12:00 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 77912 00:10:07.291 05:12:00 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:10:07.291 05:12:00 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:07.291 05:12:00 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77912 00:10:07.291 05:12:00 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:07.291 05:12:00 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:07.291 killing process with pid 77912 00:10:07.291 05:12:00 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77912' 00:10:07.291 05:12:00 nvme_rpc -- common/autotest_common.sh@969 -- # kill 77912 00:10:07.291 05:12:00 nvme_rpc -- common/autotest_common.sh@974 -- # wait 77912 00:10:07.552 00:10:07.552 real 0m2.204s 00:10:07.552 user 0m4.254s 00:10:07.552 sys 0m0.538s 00:10:07.552 05:12:00 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.552 ************************************ 00:10:07.552 05:12:00 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.552 END TEST nvme_rpc 00:10:07.552 ************************************ 00:10:07.815 05:12:00 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:07.815 05:12:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:10:07.815 05:12:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.815 05:12:00 -- common/autotest_common.sh@10 -- # set +x 00:10:07.815 ************************************ 00:10:07.815 START TEST nvme_rpc_timeouts 00:10:07.815 ************************************ 00:10:07.815 05:12:00 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:07.815 * Looking for test storage... 00:10:07.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:07.815 05:12:00 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:07.815 05:12:00 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:10:07.815 05:12:00 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:07.815 05:12:00 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.815 05:12:00 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:07.815 05:12:00 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.815 05:12:00 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:07.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.815 --rc genhtml_branch_coverage=1 00:10:07.815 --rc genhtml_function_coverage=1 00:10:07.815 --rc genhtml_legend=1 00:10:07.815 --rc geninfo_all_blocks=1 00:10:07.815 --rc geninfo_unexecuted_blocks=1 00:10:07.815 00:10:07.815 ' 00:10:07.815 05:12:00 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:07.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.815 --rc genhtml_branch_coverage=1 00:10:07.815 --rc genhtml_function_coverage=1 00:10:07.815 --rc genhtml_legend=1 00:10:07.815 --rc geninfo_all_blocks=1 00:10:07.815 --rc geninfo_unexecuted_blocks=1 00:10:07.815 00:10:07.815 ' 00:10:07.815 05:12:00 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:07.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.815 --rc genhtml_branch_coverage=1 00:10:07.815 --rc genhtml_function_coverage=1 00:10:07.815 --rc genhtml_legend=1 00:10:07.815 --rc geninfo_all_blocks=1 00:10:07.815 --rc geninfo_unexecuted_blocks=1 00:10:07.815 00:10:07.815 ' 00:10:07.815 05:12:00 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:07.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.815 --rc genhtml_branch_coverage=1 00:10:07.815 --rc genhtml_function_coverage=1 00:10:07.815 --rc genhtml_legend=1 00:10:07.815 --rc geninfo_all_blocks=1 00:10:07.815 --rc geninfo_unexecuted_blocks=1 00:10:07.815 00:10:07.815 ' 00:10:07.815 05:12:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.815 05:12:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77966 00:10:07.815 05:12:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77966 00:10:07.815 05:12:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77998 00:10:07.815 05:12:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
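Note the ordering here: the cleanup trap is armed before spdk_tgt is launched, so an interrupted run still kills the daemon and removes both settings dumps; the success path disarms it at nvme_rpc_timeouts.sh@52 further down. The pattern:

    trap 'kill -9 "$spdk_tgt_pid"; rm -f "$tmpfile_default_settings" "$tmpfile_modified_settings"; exit 1' \
        SIGINT SIGTERM EXIT
    # ... start spdk_tgt and run the checks ...
    trap - SIGINT SIGTERM EXIT   # test passed: drop the handler before a normal exit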
00:10:07.815 05:12:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77998 00:10:07.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.815 05:12:00 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 77998 ']' 00:10:07.815 05:12:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:07.815 05:12:00 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.815 05:12:00 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.815 05:12:00 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.815 05:12:00 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.815 05:12:00 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:08.076 [2024-11-10 05:12:01.067471] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:08.076 [2024-11-10 05:12:01.067612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77998 ] 00:10:08.076 [2024-11-10 05:12:01.217368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:08.076 [2024-11-10 05:12:01.267653] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.076 [2024-11-10 05:12:01.267721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.020 05:12:01 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.020 05:12:01 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:10:09.020 Checking default timeout settings: 00:10:09.020 05:12:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:09.020 05:12:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:09.281 Making settings changes with rpc: 00:10:09.281 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:09.281 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:09.281 Check default vs. modified settings: 00:10:09.281 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:09.281 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77966 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77966 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:09.849 Setting action_on_timeout is changed as expected. 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77966 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:09.849 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77966 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:09.850 Setting timeout_us is changed as expected. 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
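Each of the three settings is verified the same way: grep the key out of the default and the modified save_config dumps, strip everything but alphanumerics, and require the two values to differ. Condensed from the grep/sed/awk pipeline in the trace:

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_77966 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_77966 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" == "$after" ] && exit 1   # must change: none->abort, 0->12000000, 0->24000000
        echo "Setting $setting is changed as expected."
    done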
00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77966 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77966 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:09.850 Setting timeout_admin_us is changed as expected. 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77966 /tmp/settings_modified_77966 00:10:09.850 05:12:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77998 00:10:09.850 05:12:02 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 77998 ']' 00:10:09.850 05:12:02 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 77998 00:10:09.850 05:12:02 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:10:09.850 05:12:02 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:09.850 05:12:02 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77998 00:10:09.850 05:12:02 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:09.850 05:12:02 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:09.850 killing process with pid 77998 00:10:09.850 05:12:02 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77998' 00:10:09.850 05:12:02 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 77998 00:10:09.850 05:12:02 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 77998 00:10:10.108 RPC TIMEOUT SETTING TEST PASSED. 00:10:10.109 05:12:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:10:10.109 00:10:10.109 real 0m2.314s 00:10:10.109 user 0m4.568s 00:10:10.109 sys 0m0.543s 00:10:10.109 05:12:03 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.109 05:12:03 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:10.109 ************************************ 00:10:10.109 END TEST nvme_rpc_timeouts 00:10:10.109 ************************************ 00:10:10.109 05:12:03 -- spdk/autotest.sh@239 -- # uname -s 00:10:10.109 05:12:03 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:10.109 05:12:03 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:10.109 05:12:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:10.109 05:12:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.109 05:12:03 -- common/autotest_common.sh@10 -- # set +x 00:10:10.109 ************************************ 00:10:10.109 START TEST sw_hotplug 00:10:10.109 ************************************ 00:10:10.109 05:12:03 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:10.109 * Looking for test storage... 00:10:10.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:10.109 05:12:03 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:10.109 05:12:03 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:10:10.109 05:12:03 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:10.109 05:12:03 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:10.109 05:12:03 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.109 05:12:03 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.109 05:12:03 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.109 05:12:03 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.109 05:12:03 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.109 05:12:03 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.109 05:12:03 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.109 05:12:03 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.109 05:12:03 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.368 05:12:03 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:10.368 05:12:03 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.368 05:12:03 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:10.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.368 --rc genhtml_branch_coverage=1 00:10:10.368 --rc genhtml_function_coverage=1 00:10:10.368 --rc genhtml_legend=1 00:10:10.368 --rc geninfo_all_blocks=1 00:10:10.368 --rc geninfo_unexecuted_blocks=1 00:10:10.368 00:10:10.368 ' 00:10:10.368 05:12:03 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:10.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.368 --rc genhtml_branch_coverage=1 00:10:10.368 --rc genhtml_function_coverage=1 00:10:10.368 --rc genhtml_legend=1 00:10:10.368 --rc geninfo_all_blocks=1 00:10:10.368 --rc geninfo_unexecuted_blocks=1 00:10:10.368 00:10:10.368 ' 00:10:10.368 05:12:03 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:10.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.368 --rc genhtml_branch_coverage=1 00:10:10.368 --rc genhtml_function_coverage=1 00:10:10.368 --rc genhtml_legend=1 00:10:10.368 --rc geninfo_all_blocks=1 00:10:10.368 --rc geninfo_unexecuted_blocks=1 00:10:10.368 00:10:10.368 ' 00:10:10.368 05:12:03 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:10.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.368 --rc genhtml_branch_coverage=1 00:10:10.368 --rc genhtml_function_coverage=1 00:10:10.368 --rc genhtml_legend=1 00:10:10.368 --rc geninfo_all_blocks=1 00:10:10.368 --rc geninfo_unexecuted_blocks=1 00:10:10.368 00:10:10.368 ' 00:10:10.368 05:12:03 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:10.627 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:10.627 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:10.627 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:10.627 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:10.627 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:10.627 05:12:03 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:10.627 05:12:03 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:10.627 05:12:03 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
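The nvme_in_userspace expansion that follows is a pure PCI class-code filter: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVMe), giving the "0108"/"-p02" pattern matched against lspci output. The pipeline from the trace:

    # list the PCI addresses of all NVMe-class functions
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

Each surviving BDF is then checked by pci_can_use against the (empty) allow/block lists and for a bound nvme driver under /sys/bus/pci/drivers/nvme before it joins the array.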
00:10:10.627 05:12:03 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:10.627 05:12:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:10.628 05:12:03 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:10.628 05:12:03 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:10.628 05:12:03 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:10.628 05:12:03 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:10.628 05:12:03 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:11.194 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:11.194 Waiting for block devices as requested 00:10:11.194 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.194 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.451 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.451 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:16.719 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:16.719 05:12:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:16.719 05:12:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:16.979 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:16.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:16.979 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:17.238 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:17.496 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:17.496 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:17.496 05:12:10 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:17.496 05:12:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:17.754 05:12:10 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:17.754 05:12:10 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:17.754 05:12:10 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78839 00:10:17.754 05:12:10 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:17.754 05:12:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:17.754 05:12:10 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:17.754 05:12:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:17.754 05:12:10 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:17.754 05:12:10 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:17.754 05:12:10 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:17.754 05:12:10 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:17.754 05:12:10 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:17.754 05:12:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:17.754 05:12:10 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:17.754 05:12:10 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:17.754 05:12:10 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:17.754 05:12:10 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:17.754 Initializing NVMe Controllers 00:10:17.754 Attaching to 0000:00:10.0 00:10:17.754 Attaching to 0000:00:11.0 00:10:17.754 Attached to 0000:00:10.0 00:10:17.754 Attached to 0000:00:11.0 00:10:17.754 Initialization complete. Starting I/O... 
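The nvme_in_userspace walk traced earlier boils down to filtering lspci output for class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe), then dropping any controller that pci_can_use vetoes. A simplified sketch of the enumeration half (the traced awk matches the quoted class field against a cc variable; this version matches the other way around for brevity):

    nvme_bdfs() {
        lspci -mm -n -D |              # machine-readable, numeric IDs, full domain:bus:dev.fn
            grep -- -p02 |             # keep only the NVMe programming interface
            awk '$2 ~ /0108/ { print $1 }'
    }

    nvme_bdfs    # prints e.g. 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
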
00:10:17.754 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:17.754 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:17.754 00:10:19.129 QEMU NVMe Ctrl (12340 ): 3018 I/Os completed (+3018) 00:10:19.129 QEMU NVMe Ctrl (12341 ): 3030 I/Os completed (+3030) 00:10:19.129 00:10:20.064 QEMU NVMe Ctrl (12340 ): 6798 I/Os completed (+3780) 00:10:20.064 QEMU NVMe Ctrl (12341 ): 6934 I/Os completed (+3904) 00:10:20.064 00:10:20.999 QEMU NVMe Ctrl (12340 ): 11747 I/Os completed (+4949) 00:10:20.999 QEMU NVMe Ctrl (12341 ): 12079 I/Os completed (+5145) 00:10:20.999 00:10:21.934 QEMU NVMe Ctrl (12340 ): 15912 I/Os completed (+4165) 00:10:21.934 QEMU NVMe Ctrl (12341 ): 16427 I/Os completed (+4348) 00:10:21.934 00:10:22.868 QEMU NVMe Ctrl (12340 ): 20132 I/Os completed (+4220) 00:10:22.869 QEMU NVMe Ctrl (12341 ): 20660 I/Os completed (+4233) 00:10:22.869 00:10:23.803 05:12:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:23.804 05:12:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:23.804 05:12:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:23.804 [2024-11-10 05:12:16.792079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:23.804 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:23.804 [2024-11-10 05:12:16.793115] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.804 [2024-11-10 05:12:16.793160] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.804 [2024-11-10 05:12:16.793175] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.804 [2024-11-10 05:12:16.793191] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.804 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:23.804 [2024-11-10 05:12:16.794445] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.804 [2024-11-10 05:12:16.794489] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.804 [2024-11-10 05:12:16.794502] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.804 [2024-11-10 05:12:16.794518] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.804 05:12:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:23.804 05:12:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:23.804 [2024-11-10 05:12:16.816226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
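The bare "# echo 1" at sw_hotplug.sh@40 in the trace above is the surprise-removal half of the test: it writes into each device's sysfs remove node. A sketch of the effect, assuming the two BDFs kept in nvmes earlier (requires root):

    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"    # hot-unplug the function from under the driver
    done

The nvme_ctrlr_fail and nvme_pcie_qpair_abort_trackers errors that follow are the intended reaction, not a test failure: the driver notices the controller is gone and aborts every command still outstanding on its queues.
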
00:10:23.804 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:23.804 [2024-11-10 05:12:16.817154] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.804 [2024-11-10 05:12:16.817190] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.804 [2024-11-10 05:12:16.817205] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.804 [2024-11-10 05:12:16.817218] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.804 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:23.804 [2024-11-10 05:12:16.818221] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.804 [2024-11-10 05:12:16.818248] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.804 [2024-11-10 05:12:16.818264] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.804 [2024-11-10 05:12:16.818276] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.804 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/device 00:10:23.804 EAL: Scan for (pci) bus failed. 00:10:23.804 05:12:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:23.804 05:12:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:23.804 05:12:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:23.804 05:12:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:23.804 05:12:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:23.804 00:10:23.804 05:12:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:23.804 05:12:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:23.804 05:12:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:23.804 05:12:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:23.804 05:12:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:23.804 Attaching to 0000:00:10.0 00:10:23.804 Attached to 0000:00:10.0 00:10:24.062 05:12:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:24.062 05:12:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.062 05:12:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:24.062 Attaching to 0000:00:11.0 00:10:24.062 Attached to 0000:00:11.0 00:10:24.997 QEMU NVMe Ctrl (12340 ): 4283 I/Os completed (+4283) 00:10:24.997 QEMU NVMe Ctrl (12341 ): 4002 I/Os completed (+4002) 00:10:24.997 00:10:25.934 QEMU NVMe Ctrl (12340 ): 8438 I/Os completed (+4155) 00:10:25.934 QEMU NVMe Ctrl (12341 ): 8345 I/Os completed (+4343) 00:10:25.934 00:10:26.881 QEMU NVMe Ctrl (12340 ): 13324 I/Os completed (+4886) 00:10:26.881 QEMU NVMe Ctrl (12341 ): 13465 I/Os completed (+5120) 00:10:26.881 00:10:27.822 QEMU NVMe Ctrl (12340 ): 17425 I/Os completed (+4101) 00:10:27.822 QEMU NVMe Ctrl (12341 ): 17587 I/Os completed (+4122) 00:10:27.822 00:10:28.756 QEMU NVMe Ctrl (12340 ): 21757 I/Os completed (+4332) 00:10:28.756 QEMU NVMe Ctrl (12341 ): 21853 I/Os completed (+4266) 00:10:28.756 00:10:30.130 QEMU NVMe Ctrl (12340 ): 26156 I/Os completed (+4399) 00:10:30.130 QEMU NVMe Ctrl (12341 ): 26134 I/Os completed (+4281) 00:10:30.130 00:10:31.065 QEMU NVMe Ctrl (12340 ): 30509 I/Os completed (+4353) 00:10:31.065 QEMU NVMe Ctrl (12341 ): 30419 I/Os completed (+4285) 
00:10:31.065 00:10:31.998 QEMU NVMe Ctrl (12340 ): 34860 I/Os completed (+4351) 00:10:31.998 QEMU NVMe Ctrl (12341 ): 34667 I/Os completed (+4248) 00:10:31.998 00:10:32.931 QEMU NVMe Ctrl (12340 ): 39188 I/Os completed (+4328) 00:10:32.931 QEMU NVMe Ctrl (12341 ): 38972 I/Os completed (+4305) 00:10:32.931 00:10:33.870 QEMU NVMe Ctrl (12340 ): 42950 I/Os completed (+3762) 00:10:33.870 QEMU NVMe Ctrl (12341 ): 42719 I/Os completed (+3747) 00:10:33.870 00:10:34.839 QEMU NVMe Ctrl (12340 ): 47348 I/Os completed (+4398) 00:10:34.839 QEMU NVMe Ctrl (12341 ): 46962 I/Os completed (+4243) 00:10:34.839 00:10:35.774 QEMU NVMe Ctrl (12340 ): 51765 I/Os completed (+4417) 00:10:35.774 QEMU NVMe Ctrl (12341 ): 51193 I/Os completed (+4231) 00:10:35.774 00:10:36.033 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:36.033 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:36.033 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.033 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.033 [2024-11-10 05:12:29.070392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:36.033 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:36.033 [2024-11-10 05:12:29.071223] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.033 [2024-11-10 05:12:29.071254] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.033 [2024-11-10 05:12:29.071267] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.033 [2024-11-10 05:12:29.071283] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.033 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:36.033 [2024-11-10 05:12:29.072275] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.033 [2024-11-10 05:12:29.072310] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.033 [2024-11-10 05:12:29.072321] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.033 [2024-11-10 05:12:29.072333] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.033 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:10:36.033 EAL: Scan for (pci) bus failed. 00:10:36.033 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.033 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.033 [2024-11-10 05:12:29.091815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:36.033 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:36.033 [2024-11-10 05:12:29.092563] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.033 [2024-11-10 05:12:29.092595] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.033 [2024-11-10 05:12:29.092609] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.033 [2024-11-10 05:12:29.092621] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.033 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:36.033 [2024-11-10 05:12:29.093450] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.033 [2024-11-10 05:12:29.093482] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.033 [2024-11-10 05:12:29.093497] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.033 [2024-11-10 05:12:29.093507] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.033 EAL: eal_parse_sysfs_value(): cannot read sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:36.033 EAL: Scan for (pci) bus failed. 00:10:36.033 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:36.033 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:36.033 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:36.033 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:36.033 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:36.033 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:36.033 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:36.033 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:36.033 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:36.033 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:36.033 Attaching to 0000:00:10.0 00:10:36.033 Attached to 0000:00:10.0 00:10:36.291 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:36.291 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:36.291 05:12:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:36.291 Attaching to 0000:00:11.0 00:10:36.291 Attached to 0000:00:11.0 00:10:36.855 QEMU NVMe Ctrl (12340 ): 3034 I/Os completed (+3034) 00:10:36.855 QEMU NVMe Ctrl (12341 ): 2679 I/Os completed (+2679) 00:10:36.855 00:10:37.789 QEMU NVMe Ctrl (12340 ): 7411 I/Os completed (+4377) 00:10:37.789 QEMU NVMe Ctrl (12341 ): 6870 I/Os completed (+4191) 00:10:37.789 00:10:39.164 QEMU NVMe Ctrl (12340 ): 11601 I/Os completed (+4190) 00:10:39.164 QEMU NVMe Ctrl (12341 ): 10830 I/Os completed (+3960) 00:10:39.164 00:10:40.098 QEMU NVMe Ctrl (12340 ): 15384 I/Os completed (+3783) 00:10:40.098 QEMU NVMe Ctrl (12341 ): 14629 I/Os completed (+3799) 00:10:40.098 00:10:41.033 QEMU NVMe Ctrl (12340 ): 19657 I/Os completed (+4273) 00:10:41.033 QEMU NVMe Ctrl (12341 ): 18889 I/Os completed (+4260) 00:10:41.033 00:10:41.969 QEMU NVMe Ctrl (12340 ): 23849 I/Os completed (+4192) 00:10:41.969 QEMU NVMe Ctrl (12341 ): 23082 I/Os completed (+4193) 00:10:41.969 00:10:42.934 QEMU NVMe Ctrl (12340 ): 28194 I/Os completed (+4345) 00:10:42.934 QEMU NVMe Ctrl (12341 ): 27282 I/Os completed (+4200) 00:10:42.934 
00:10:43.874 QEMU NVMe Ctrl (12340 ): 32686 I/Os completed (+4492) 00:10:43.874 QEMU NVMe Ctrl (12341 ): 31724 I/Os completed (+4442) 00:10:43.874 00:10:44.818 QEMU NVMe Ctrl (12340 ): 37167 I/Os completed (+4481) 00:10:44.818 QEMU NVMe Ctrl (12341 ): 36009 I/Os completed (+4285) 00:10:44.818 00:10:45.760 QEMU NVMe Ctrl (12340 ): 41545 I/Os completed (+4378) 00:10:45.760 QEMU NVMe Ctrl (12341 ): 40234 I/Os completed (+4225) 00:10:45.760 00:10:47.134 QEMU NVMe Ctrl (12340 ): 45476 I/Os completed (+3931) 00:10:47.134 QEMU NVMe Ctrl (12341 ): 44106 I/Os completed (+3872) 00:10:47.134 00:10:48.068 QEMU NVMe Ctrl (12340 ): 49323 I/Os completed (+3847) 00:10:48.068 QEMU NVMe Ctrl (12341 ): 47926 I/Os completed (+3820) 00:10:48.068 00:10:48.326 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:48.326 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:48.326 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:48.326 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:48.326 [2024-11-10 05:12:41.338611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:48.326 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:48.326 [2024-11-10 05:12:41.339481] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.326 [2024-11-10 05:12:41.339511] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.327 [2024-11-10 05:12:41.339524] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.327 [2024-11-10 05:12:41.339541] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.327 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:48.327 [2024-11-10 05:12:41.340606] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.327 [2024-11-10 05:12:41.340637] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.327 [2024-11-10 05:12:41.340648] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.327 [2024-11-10 05:12:41.340659] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.327 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:48.327 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:48.327 [2024-11-10 05:12:41.358918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
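The echo run at sw_hotplug.sh@56 through @62, traced twice above, is the re-attach half; set -x hides the redirections, so the values read cryptically on their own. The conventional shape of that sysfs dance is sketched below (the node names are the standard kernel interfaces and are an assumption here; the trace also echoes each BDF twice, plausibly an unbind followed by a probe, while this sketch shows only the probe):

    echo 1 > /sys/bus/pci/rescan                                    # bring the removed functions back
    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers_probe                    # bind it to the overridden driver
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"       # clear the override again
    done

The trailing empty echo in the trace matches clearing the override so later rescans bind normally.
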
00:10:48.327 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:48.327 [2024-11-10 05:12:41.359679] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.327 [2024-11-10 05:12:41.359711] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.327 [2024-11-10 05:12:41.359725] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.327 [2024-11-10 05:12:41.359737] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.327 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:48.327 [2024-11-10 05:12:41.360569] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.327 [2024-11-10 05:12:41.360599] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.327 [2024-11-10 05:12:41.360612] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.327 [2024-11-10 05:12:41.360622] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.327 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:48.327 EAL: Scan for (pci) bus failed. 00:10:48.327 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:48.327 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:48.327 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:48.327 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:48.327 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:48.327 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:48.327 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:48.327 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:48.327 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:48.327 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:48.327 Attaching to 0000:00:10.0 00:10:48.327 Attached to 0000:00:10.0 00:10:48.585 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:48.586 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:48.586 05:12:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:48.586 Attaching to 0000:00:11.0 00:10:48.586 Attached to 0000:00:11.0 00:10:48.586 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:48.586 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:48.586 [2024-11-10 05:12:41.602571] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:00.795 05:12:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:00.795 05:12:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:00.795 05:12:53 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.81 00:11:00.795 05:12:53 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.81 00:11:00.795 05:12:53 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:00.795 05:12:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.81 00:11:00.795 05:12:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.81 2 00:11:00.795 remove_attach_helper took 42.81s to complete (handling 2 nvme drive(s)) 05:12:53 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:07.354 05:12:59 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78839 00:11:07.354 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78839) - No such process 00:11:07.354 05:12:59 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78839 00:11:07.354 05:12:59 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:07.354 05:12:59 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:07.354 05:12:59 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:07.354 05:12:59 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79395 00:11:07.354 05:12:59 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:07.354 05:12:59 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79395 00:11:07.354 05:12:59 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 79395 ']' 00:11:07.354 05:12:59 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.354 05:12:59 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:07.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.354 05:12:59 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.354 05:12:59 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:07.354 05:12:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.354 05:12:59 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:07.354 [2024-11-10 05:12:59.682506] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
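From here the test switches from the standalone hotplug example app to a full SPDK target. The equivalent manual bring-up, sketched under the log's vagrant paths (the polling loop stands in for the autotest waitforlisten helper; rpc_cmd in the trace wraps the repo's scripts/rpc.py):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!

    # wait until the RPC socket answers before issuing commands
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done

    # enable the bdev-nvme hotplug monitor, as the trace does next
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -e
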
00:11:07.354 [2024-11-10 05:12:59.683280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79395 ] 00:11:07.354 [2024-11-10 05:12:59.831697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.354 [2024-11-10 05:12:59.864448] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.354 05:13:00 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:07.354 05:13:00 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:11:07.354 05:13:00 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:07.354 05:13:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.354 05:13:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.354 05:13:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.354 05:13:00 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:07.354 05:13:00 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:07.354 05:13:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:07.354 05:13:00 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:07.354 05:13:00 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:07.354 05:13:00 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:07.354 05:13:00 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:07.354 05:13:00 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:07.354 05:13:00 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:07.354 05:13:00 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:07.354 05:13:00 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:07.354 05:13:00 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:07.354 05:13:00 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:13.920 05:13:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:13.920 05:13:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:13.920 05:13:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:13.920 05:13:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:13.920 05:13:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:13.920 05:13:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:13.920 05:13:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:13.920 05:13:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:13.920 05:13:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.920 05:13:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.920 05:13:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.920 05:13:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.920 05:13:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.920 05:13:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.920 05:13:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:13.920 05:13:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:13.920 [2024-11-10 05:13:06.613305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:11:13.920 [2024-11-10 05:13:06.614398] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.920 [2024-11-10 05:13:06.614424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.920 [2024-11-10 05:13:06.614437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.920 [2024-11-10 05:13:06.614449] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.920 [2024-11-10 05:13:06.614458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.920 [2024-11-10 05:13:06.614465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.920 [2024-11-10 05:13:06.614476] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.920 [2024-11-10 05:13:06.614483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.920 [2024-11-10 05:13:06.614491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.920 [2024-11-10 05:13:06.614498] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.920 [2024-11-10 05:13:06.614506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.920 [2024-11-10 05:13:06.614512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.920 [2024-11-10 05:13:07.013305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:13.920 [2024-11-10 05:13:07.014335] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.920 [2024-11-10 05:13:07.014364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.920 [2024-11-10 05:13:07.014374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.920 [2024-11-10 05:13:07.014386] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.920 [2024-11-10 05:13:07.014393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.920 [2024-11-10 05:13:07.014401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.920 [2024-11-10 05:13:07.014407] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.920 [2024-11-10 05:13:07.014415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.920 [2024-11-10 05:13:07.014421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.920 [2024-11-10 05:13:07.014430] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.920 [2024-11-10 05:13:07.014436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.920 [2024-11-10 05:13:07.014444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.920 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:13.920 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:13.920 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:13.921 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.921 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.921 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.921 05:13:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.921 05:13:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.921 05:13:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.921 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:13.921 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:14.178 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.178 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.178 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:14.178 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:14.178 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.178 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.179 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.179 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:14.179 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:14.179 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.179 05:13:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:26.404 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:26.404 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:26.404 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:26.405 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.405 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.405 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.405 05:13:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.405 05:13:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.405 05:13:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.405 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.405 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:26.405 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.405 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.405 [2024-11-10 05:13:19.413524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:26.405 [2024-11-10 05:13:19.414715] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.405 [2024-11-10 05:13:19.414749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.405 [2024-11-10 05:13:19.414761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.405 [2024-11-10 05:13:19.414772] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.405 [2024-11-10 05:13:19.414781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.405 [2024-11-10 05:13:19.414788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.405 [2024-11-10 05:13:19.414796] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.405 [2024-11-10 05:13:19.414803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.405 [2024-11-10 05:13:19.414811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.405 [2024-11-10 05:13:19.414817] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.405 [2024-11-10 05:13:19.414825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.405 [2024-11-10 05:13:19.414831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.405 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.405 05:13:19 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.405 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:26.405 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:26.405 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:26.405 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.405 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.405 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.405 05:13:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.405 05:13:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.405 05:13:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.405 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:26.405 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:26.665 [2024-11-10 05:13:19.813536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:26.665 [2024-11-10 05:13:19.814556] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.665 [2024-11-10 05:13:19.814589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.665 [2024-11-10 05:13:19.814599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.665 [2024-11-10 05:13:19.814611] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.665 [2024-11-10 05:13:19.814619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.665 [2024-11-10 05:13:19.814627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.665 [2024-11-10 05:13:19.814633] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.665 [2024-11-10 05:13:19.814641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.665 [2024-11-10 05:13:19.814647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.665 [2024-11-10 05:13:19.814654] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.665 [2024-11-10 05:13:19.814661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.665 [2024-11-10 05:13:19.814669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.035 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:27.035 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:27.035 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:27.035 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:27.035 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:27.035 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:27.035 05:13:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.035 05:13:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:27.035 05:13:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.035 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:27.035 05:13:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:27.035 05:13:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.035 05:13:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.035 05:13:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:27.035 05:13:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:27.035 05:13:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.035 05:13:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.035 05:13:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.035 05:13:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:27.035 05:13:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:27.035 05:13:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.035 05:13:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:39.312 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:39.312 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:39.312 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:39.312 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.312 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.312 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.312 05:13:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.312 05:13:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.312 05:13:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.312 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:39.312 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:39.312 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:39.312 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:39.312 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:39.312 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:39.312 [2024-11-10 05:13:32.313784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
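The bdfs=($(bdev_bdfs)) pattern that recurs through this phase is the target-side view of attachment, assembled here from the traced pieces into a standalone helper: ask the target which bdevs it currently exposes and reduce them to unique PCI addresses.

    bdev_bdfs() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' |
            sort -u
    }

    bdfs=($(bdev_bdfs))    # both BDFs while attached, shrinking toward none as removals land
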
00:11:39.312 [2024-11-10 05:13:32.314875] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.312 [2024-11-10 05:13:32.314909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.312 [2024-11-10 05:13:32.314922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.312 [2024-11-10 05:13:32.314934] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.312 [2024-11-10 05:13:32.314943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.312 [2024-11-10 05:13:32.314949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.312 [2024-11-10 05:13:32.314957] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.312 [2024-11-10 05:13:32.314964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.312 [2024-11-10 05:13:32.314972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.312 [2024-11-10 05:13:32.314978] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.312 [2024-11-10 05:13:32.314985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.312 [2024-11-10 05:13:32.315001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.312 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:39.313 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.313 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.313 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.313 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.313 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.313 05:13:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.313 05:13:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.313 05:13:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.313 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:39.313 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:39.573 [2024-11-10 05:13:32.713788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
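The (( 1 > 0 )), sleep 0.5, and "Still waiting for %s to be gone" records traced above come from the detach poll. In loop form it is roughly the following (a sketch; the real helper also intersects the reported BDFs with the set it just removed):

    while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done

Once bdev_get_bdevs reports no NVMe bdevs for the removed controllers, (( 0 > 0 )) fails and the iteration moves on to re-attach.
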
00:11:39.573 [2024-11-10 05:13:32.714771] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.573 [2024-11-10 05:13:32.714803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.573 [2024-11-10 05:13:32.714816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.573 [2024-11-10 05:13:32.714828] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.573 [2024-11-10 05:13:32.714835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.573 [2024-11-10 05:13:32.714845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.573 [2024-11-10 05:13:32.714851] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.573 [2024-11-10 05:13:32.714859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.573 [2024-11-10 05:13:32.714866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.573 [2024-11-10 05:13:32.714874] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.573 [2024-11-10 05:13:32.714880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.573 [2024-11-10 05:13:32.714887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.834 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:39.834 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.834 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.834 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.834 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.834 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.834 05:13:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.834 05:13:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.834 05:13:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.834 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:39.834 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:39.834 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.834 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.834 05:13:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:39.834 05:13:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:39.834 05:13:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.834 05:13:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.834 05:13:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.834 05:13:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:40.095 05:13:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:40.095 05:13:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.095 05:13:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.64 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.64 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.64 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.64 2 00:11:52.333 remove_attach_helper took 44.64s to complete (handling 2 nvme drive(s)) 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:52.333 05:13:45 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:52.333 05:13:45 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:52.333 05:13:45 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:58.958 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:58.958 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:58.958 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:58.958 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:58.958 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:58.958 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:58.958 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:58.958 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:58.958 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.958 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.958 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:58.958 05:13:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.958 05:13:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.958 05:13:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.958 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:58.958 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:58.958 [2024-11-10 05:13:51.288850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:58.958 [2024-11-10 05:13:51.289633] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.958 [2024-11-10 05:13:51.289660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.958 [2024-11-10 05:13:51.289672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.958 [2024-11-10 05:13:51.289683] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.958 [2024-11-10 05:13:51.289692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.958 [2024-11-10 05:13:51.289699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.958 [2024-11-10 05:13:51.289707] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.958 [2024-11-10 05:13:51.289713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.958 [2024-11-10 05:13:51.289725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.959 [2024-11-10 05:13:51.289731] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.959 [2024-11-10 05:13:51.289739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.959 [2024-11-10 05:13:51.289745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.959 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:58.959 05:13:51 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:58.959 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:58.959 [2024-11-10 05:13:51.788859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:58.959 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.959 [2024-11-10 05:13:51.789620] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.959 [2024-11-10 05:13:51.789651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.959 [2024-11-10 05:13:51.789660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.959 [2024-11-10 05:13:51.789672] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.959 [2024-11-10 05:13:51.789679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.959 [2024-11-10 05:13:51.789688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.959 [2024-11-10 05:13:51.789694] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.959 [2024-11-10 05:13:51.789702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.959 [2024-11-10 05:13:51.789708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.959 [2024-11-10 05:13:51.789716] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.959 [2024-11-10 05:13:51.789722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.959 [2024-11-10 05:13:51.789732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.959 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.959 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:58.959 05:13:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.959 05:13:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.959 05:13:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.959 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:58.959 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:58.959 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:58.959 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:58.959 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:58.959 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:58.959 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:58.959 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:58.959 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:58.959 05:13:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:58.959 05:13:52 
sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:58.959 05:13:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:58.959 05:13:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:11.227 05:14:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.227 05:14:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.227 05:14:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:11.227 [2024-11-10 05:14:04.089115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:11.227 [2024-11-10 05:14:04.089976] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.227 [2024-11-10 05:14:04.090013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.227 [2024-11-10 05:14:04.090025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.227 [2024-11-10 05:14:04.090037] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.227 [2024-11-10 05:14:04.090048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.227 [2024-11-10 05:14:04.090055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.227 [2024-11-10 05:14:04.090063] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.227 [2024-11-10 05:14:04.090069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.227 [2024-11-10 05:14:04.090077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.227 [2024-11-10 05:14:04.090083] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.227 [2024-11-10 05:14:04.090091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.227 [2024-11-10 05:14:04.090097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- 
# echo 1 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:11.227 05:14:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.227 05:14:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.227 05:14:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:11.227 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:11.488 [2024-11-10 05:14:04.489115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:11.488 [2024-11-10 05:14:04.489850] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.488 [2024-11-10 05:14:04.489885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.488 [2024-11-10 05:14:04.489894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.488 [2024-11-10 05:14:04.489906] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.488 [2024-11-10 05:14:04.489913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.488 [2024-11-10 05:14:04.489921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.488 [2024-11-10 05:14:04.489927] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.488 [2024-11-10 05:14:04.489935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.488 [2024-11-10 05:14:04.489942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.488 [2024-11-10 05:14:04.489949] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.488 [2024-11-10 05:14:04.489955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.488 [2024-11-10 05:14:04.489963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.488 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:11.488 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:11.488 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:11.488 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:11.488 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:11.488 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:11.488 05:14:04 sw_hotplug 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.488 05:14:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.488 05:14:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.488 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:11.488 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:11.748 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.748 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.748 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:11.748 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:11.748 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.748 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.748 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.748 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:11.748 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:11.748 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.748 05:14:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:24.015 05:14:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.015 05:14:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:24.015 05:14:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:24.015 05:14:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.015 05:14:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:24.015 05:14:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.015 [2024-11-10 05:14:16.989351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
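The poll above at sw_hotplug.sh@50 repeats until no NVMe bdevs remain after the hot-remove. Reconstructed from the @12-@13 trace lines, the bdev_bdfs helper behind bdfs=($(bdev_bdfs)) looks roughly like the sketch below; rpc_cmd comes from the test harness, and the plain pipe here stands in for the /dev/fd/63 process substitution that the jq invocation in the trace reveals.

    # Sketch of sw_hotplug.sh@12-13 as reconstructed from this trace: list all bdevs
    # over RPC and reduce the NVMe-backed ones to a sorted, de-duplicated BDF set.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    bdfs=($(bdev_bdfs))
    (( ${#bdfs[@]} > 0 )) && sleep 0.5   # the @50-@51 wait loop polls every 0.5 s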
00:12:24.015 [2024-11-10 05:14:16.990129] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.015 [2024-11-10 05:14:16.990154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.015 [2024-11-10 05:14:16.990166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.015 [2024-11-10 05:14:16.990178] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.015 [2024-11-10 05:14:16.990190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.015 [2024-11-10 05:14:16.990197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.015 [2024-11-10 05:14:16.990205] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.015 [2024-11-10 05:14:16.990211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.015 [2024-11-10 05:14:16.990220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.015 [2024-11-10 05:14:16.990226] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.015 [2024-11-10 05:14:16.990233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.015 [2024-11-10 05:14:16.990240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:24.015 05:14:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:24.304 [2024-11-10 05:14:17.489354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
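Once the wait loop drains, sw_hotplug.sh@56-@62 re-attaches both drives. xtrace records only the echoed values (1, uio_pci_generic, the BDF twice, an empty string), not the redirection targets, so the sysfs paths in this sketch are assumptions based on the standard Linux PCI remove/rescan and driver_override interface rather than the script's literal targets.

    # One plausible mapping of the echo steps to sysfs nodes; the paths are assumed,
    # since only the echoed values appear in the trace. $bdf is e.g. 0000:00:10.0.
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"                         # @40: hot-remove
    echo 1 > /sys/bus/pci/rescan                                        # @56: re-enumerate
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"  # @59: pin the driver
    echo "$bdf" > /sys/bus/pci/drivers_probe                            # @60-@61: rebind
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"               # @62: clear override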
00:12:24.304 [2024-11-10 05:14:17.490099] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.304 [2024-11-10 05:14:17.490130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.304 [2024-11-10 05:14:17.490139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.304 [2024-11-10 05:14:17.490150] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.304 [2024-11-10 05:14:17.490158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.304 [2024-11-10 05:14:17.490166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.304 [2024-11-10 05:14:17.490172] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.304 [2024-11-10 05:14:17.490182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.304 [2024-11-10 05:14:17.490189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.304 [2024-11-10 05:14:17.490196] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.304 [2024-11-10 05:14:17.490202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.304 [2024-11-10 05:14:17.490210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.304 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:24.304 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:24.304 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:24.304 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:24.304 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:24.304 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:24.304 05:14:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.304 05:14:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:24.304 05:14:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.304 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:24.304 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:24.566 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:24.566 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:24.566 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:24.566 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:24.566 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:24.566 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:24.566 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:24.566 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:24.566 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:24.567 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:24.567 05:14:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:36.846 05:14:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:36.846 05:14:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:36.846 05:14:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:36.846 05:14:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:36.846 05:14:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:36.846 05:14:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:36.846 05:14:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.846 05:14:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:36.846 05:14:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.846 05:14:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:36.846 05:14:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:36.846 05:14:29 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.60 00:12:36.846 05:14:29 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.60 00:12:36.846 05:14:29 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:36.846 05:14:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.60 00:12:36.846 05:14:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.60 2 00:12:36.846 remove_attach_helper took 44.60s to complete (handling 2 nvme drive(s)) 05:14:29 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:36.846 05:14:29 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79395 00:12:36.846 05:14:29 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 79395 ']' 00:12:36.846 05:14:29 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 79395 00:12:36.846 05:14:29 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:36.846 05:14:29 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:36.846 05:14:29 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79395 00:12:36.846 05:14:29 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:36.846 05:14:29 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:36.846 killing process with pid 79395 00:12:36.846 05:14:29 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79395' 00:12:36.846 05:14:29 sw_hotplug -- common/autotest_common.sh@969 -- # kill 79395 00:12:36.846 05:14:29 sw_hotplug -- common/autotest_common.sh@974 -- # wait 79395 00:12:37.108 05:14:30 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:37.369 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:37.629 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:37.629 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:37.890 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:37.890 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:37.890 00:12:37.890 real 2m27.812s 00:12:37.890 user 1m48.358s 00:12:37.890 sys 0m17.833s 00:12:37.890 05:14:31 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.890 ************************************ 00:12:37.890 END TEST sw_hotplug 00:12:37.890 ************************************ 00:12:37.890 05:14:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.890 05:14:31 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:37.890 05:14:31 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:37.890 05:14:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:37.890 05:14:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.890 05:14:31 -- common/autotest_common.sh@10 -- # set +x 00:12:37.890 ************************************ 00:12:37.890 START TEST nvme_xnvme 00:12:37.890 ************************************ 00:12:37.890 05:14:31 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:38.151 * Looking for test storage... 00:12:38.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:38.151 05:14:31 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:38.151 05:14:31 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:38.151 05:14:31 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:38.151 05:14:31 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:38.151 05:14:31 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:38.151 05:14:31 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:38.151 05:14:31 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:38.151 05:14:31 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:38.151 05:14:31 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:38.151 05:14:31 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:38.151 05:14:31 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:38.151 05:14:31 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:38.151 05:14:31 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:38.151 05:14:31 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:38.151 05:14:31 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:38.151 05:14:31 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:38.151 05:14:31 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:38.152 05:14:31 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:38.152 05:14:31 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:38.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.152 --rc genhtml_branch_coverage=1 00:12:38.152 --rc genhtml_function_coverage=1 00:12:38.152 --rc genhtml_legend=1 00:12:38.152 --rc geninfo_all_blocks=1 00:12:38.152 --rc geninfo_unexecuted_blocks=1 00:12:38.152 00:12:38.152 ' 00:12:38.152 05:14:31 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:38.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.152 --rc genhtml_branch_coverage=1 00:12:38.152 --rc genhtml_function_coverage=1 00:12:38.152 --rc genhtml_legend=1 00:12:38.152 --rc geninfo_all_blocks=1 00:12:38.152 --rc geninfo_unexecuted_blocks=1 00:12:38.152 00:12:38.152 ' 00:12:38.152 05:14:31 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:38.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.152 --rc genhtml_branch_coverage=1 00:12:38.152 --rc genhtml_function_coverage=1 00:12:38.152 --rc genhtml_legend=1 00:12:38.152 --rc geninfo_all_blocks=1 00:12:38.152 --rc geninfo_unexecuted_blocks=1 00:12:38.152 00:12:38.152 ' 00:12:38.152 05:14:31 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:38.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.152 --rc genhtml_branch_coverage=1 00:12:38.152 --rc genhtml_function_coverage=1 00:12:38.152 --rc genhtml_legend=1 00:12:38.152 --rc geninfo_all_blocks=1 00:12:38.152 --rc geninfo_unexecuted_blocks=1 00:12:38.152 00:12:38.152 ' 00:12:38.152 05:14:31 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.152 05:14:31 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.152 05:14:31 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.152 05:14:31 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.152 05:14:31 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.152 05:14:31 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:38.152 05:14:31 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.152 05:14:31 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:38.152 05:14:31 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:38.152 05:14:31 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:38.152 05:14:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:38.152 ************************************ 00:12:38.152 START TEST xnvme_to_malloc_dd_copy 00:12:38.152 ************************************ 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:38.152 05:14:31 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:38.152 05:14:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:38.152 { 00:12:38.152 "subsystems": [ 00:12:38.152 { 00:12:38.152 "subsystem": "bdev", 00:12:38.152 "config": [ 00:12:38.152 { 00:12:38.152 "params": { 00:12:38.152 "block_size": 512, 00:12:38.152 "num_blocks": 2097152, 00:12:38.152 "name": "malloc0" 00:12:38.152 }, 00:12:38.152 "method": "bdev_malloc_create" 00:12:38.152 }, 00:12:38.152 { 00:12:38.152 "params": { 00:12:38.152 "io_mechanism": "libaio", 00:12:38.152 "filename": "/dev/nullb0", 00:12:38.152 "name": "null0" 00:12:38.152 }, 00:12:38.152 "method": "bdev_xnvme_create" 00:12:38.152 }, 00:12:38.152 { 00:12:38.152 "method": "bdev_wait_for_examine" 00:12:38.152 } 00:12:38.152 ] 00:12:38.152 } 00:12:38.152 ] 00:12:38.152 } 00:12:38.152 [2024-11-10 05:14:31.353521] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
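The copy test pairs a RAM-backed malloc bdev with a null_blk device of matching size: init_null_blk gb=1 (dd/common.sh@186) exposes a 1 GiB /dev/nullb0, and the malloc bdev in the config above is 2097152 blocks of 512 B, i.e. 2097152 x 512 = 1 GiB. A minimal sketch of the bracketing helpers as they appear in the trace; the branch wiring around the module check is assumed.

    # dd/common.sh@186-187 (sketch): make sure a 1 GiB null_blk device exists,
    # sized to match the 2097152 x 512 B malloc bdev. Branch logic is assumed;
    # the trace only shows the [[ -e ... ]] test and the modprobe executing.
    [[ -e /sys/module/null_blk ]] || modprobe null_blk gb=1   # creates /dev/nullb0
    # ... run the malloc <-> null0 copy legs ...
    modprobe -r null_blk                                      # dd/common.sh@191: cleanup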
00:12:38.152 [2024-11-10 05:14:31.353670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80750 ] 00:12:38.413 [2024-11-10 05:14:31.504986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.413 [2024-11-10 05:14:31.553873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.799  [2024-11-10T05:14:33.981Z] Copying: 223/1024 [MB] (223 MBps) [2024-11-10T05:14:34.925Z] Copying: 448/1024 [MB] (224 MBps) [2024-11-10T05:14:35.869Z] Copying: 716/1024 [MB] (268 MBps) [2024-11-10T05:14:36.441Z] Copying: 1024/1024 [MB] (average 256 MBps) 00:12:43.205 00:12:43.205 05:14:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:43.205 05:14:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:43.205 05:14:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:43.205 05:14:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:43.205 { 00:12:43.205 "subsystems": [ 00:12:43.205 { 00:12:43.205 "subsystem": "bdev", 00:12:43.205 "config": [ 00:12:43.205 { 00:12:43.205 "params": { 00:12:43.205 "block_size": 512, 00:12:43.205 "num_blocks": 2097152, 00:12:43.205 "name": "malloc0" 00:12:43.205 }, 00:12:43.205 "method": "bdev_malloc_create" 00:12:43.205 }, 00:12:43.205 { 00:12:43.205 "params": { 00:12:43.205 "io_mechanism": "libaio", 00:12:43.205 "filename": "/dev/nullb0", 00:12:43.205 "name": "null0" 00:12:43.205 }, 00:12:43.205 "method": "bdev_xnvme_create" 00:12:43.205 }, 00:12:43.205 { 00:12:43.205 "method": "bdev_wait_for_examine" 00:12:43.205 } 00:12:43.205 ] 00:12:43.205 } 00:12:43.205 ] 00:12:43.205 } 00:12:43.205 [2024-11-10 05:14:36.242200] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
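Each copy leg hands the generated bdev config to spdk_dd on an anonymous descriptor, which is why the trace shows --json /dev/fd/62. Assuming gen_conf emits the subsystems JSON printed above (it is provided by the harness), the two legs at xnvme.sh@42 and @47 reduce to:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

    # xnvme.sh@42: fill null0 from the malloc bdev; <(gen_conf) is the process
    # substitution that surfaces as /dev/fd/62 in the trace.
    "$SPDK_DD" --ib=malloc0 --ob=null0 --json <(gen_conf)

    # xnvme.sh@47: copy back the other way to exercise the read path.
    "$SPDK_DD" --ib=null0 --ob=malloc0 --json <(gen_conf)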
00:12:43.205 [2024-11-10 05:14:36.242304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80810 ] 00:12:43.205 [2024-11-10 05:14:36.389248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.205 [2024-11-10 05:14:36.425400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.592  [2024-11-10T05:14:38.771Z] Copying: 308/1024 [MB] (308 MBps) [2024-11-10T05:14:39.714Z] Copying: 616/1024 [MB] (308 MBps) [2024-11-10T05:14:40.286Z] Copying: 926/1024 [MB] (309 MBps) [2024-11-10T05:14:40.548Z] Copying: 1024/1024 [MB] (average 308 MBps) 00:12:47.312 00:12:47.312 05:14:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:47.312 05:14:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:47.312 05:14:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:47.312 05:14:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:47.312 05:14:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:47.312 05:14:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:47.312 { 00:12:47.312 "subsystems": [ 00:12:47.312 { 00:12:47.312 "subsystem": "bdev", 00:12:47.312 "config": [ 00:12:47.312 { 00:12:47.312 "params": { 00:12:47.312 "block_size": 512, 00:12:47.312 "num_blocks": 2097152, 00:12:47.312 "name": "malloc0" 00:12:47.312 }, 00:12:47.312 "method": "bdev_malloc_create" 00:12:47.312 }, 00:12:47.312 { 00:12:47.312 "params": { 00:12:47.312 "io_mechanism": "io_uring", 00:12:47.312 "filename": "/dev/nullb0", 00:12:47.312 "name": "null0" 00:12:47.312 }, 00:12:47.312 "method": "bdev_xnvme_create" 00:12:47.312 }, 00:12:47.312 { 00:12:47.312 "method": "bdev_wait_for_examine" 00:12:47.312 } 00:12:47.312 ] 00:12:47.312 } 00:12:47.312 ] 00:12:47.312 } 00:12:47.312 [2024-11-10 05:14:40.402439] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
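The run starting here repeats the whole malloc <-> null0 round trip with io_mechanism switched from libaio to io_uring; comparing the JSON dumps before and after @38-@39 shows that only that one bdev_xnvme_create parameter changes. Sketched from the loop in the trace (method_bdev_xnvme_create_0 is the associative array gen_conf reads):

    # xnvme.sh@38-39 (sketch): rerun both dd legs once per I/O backend; the
    # filename (/dev/nullb0) and the bdev name (null0) stay fixed throughout.
    for io in libaio io_uring; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        "$SPDK_DD" --ib=malloc0 --ob=null0 --json <(gen_conf)
        "$SPDK_DD" --ib=null0 --ob=malloc0 --json <(gen_conf)
    done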
00:12:47.312 [2024-11-10 05:14:40.402559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80864 ] 00:12:47.573 [2024-11-10 05:14:40.556899] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.573 [2024-11-10 05:14:40.608247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.963  [2024-11-10T05:14:43.210Z] Copying: 229/1024 [MB] (229 MBps) [2024-11-10T05:14:44.153Z] Copying: 509/1024 [MB] (280 MBps) [2024-11-10T05:14:44.726Z] Copying: 825/1024 [MB] (315 MBps) [2024-11-10T05:14:44.986Z] Copying: 1024/1024 [MB] (average 282 MBps) 00:12:51.750 00:12:51.750 05:14:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:51.750 05:14:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:51.750 05:14:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:51.750 05:14:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:51.750 { 00:12:51.750 "subsystems": [ 00:12:51.750 { 00:12:51.750 "subsystem": "bdev", 00:12:51.750 "config": [ 00:12:51.750 { 00:12:51.750 "params": { 00:12:51.750 "block_size": 512, 00:12:51.750 "num_blocks": 2097152, 00:12:51.750 "name": "malloc0" 00:12:51.750 }, 00:12:51.750 "method": "bdev_malloc_create" 00:12:51.750 }, 00:12:51.750 { 00:12:51.750 "params": { 00:12:51.750 "io_mechanism": "io_uring", 00:12:51.750 "filename": "/dev/nullb0", 00:12:51.750 "name": "null0" 00:12:51.750 }, 00:12:51.750 "method": "bdev_xnvme_create" 00:12:51.750 }, 00:12:51.750 { 00:12:51.750 "method": "bdev_wait_for_examine" 00:12:51.750 } 00:12:51.750 ] 00:12:51.750 } 00:12:51.750 ] 00:12:51.750 } 00:12:51.750 [2024-11-10 05:14:44.954673] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
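For scale: each leg moves 1024 MB, so at the averages reported so far a libaio pass takes about 1024/256 = 4.0 s and 1024/308 = 3.3 s, and the first io_uring pass at 282 MBps about 3.6 s. Four such passes plus four spdk_dd startups and the null_blk setup account for the roughly 17.6 s wall time the test reports below.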
00:12:51.750 [2024-11-10 05:14:44.954796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80926 ] 00:12:52.010 [2024-11-10 05:14:45.101882] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.010 [2024-11-10 05:14:45.138313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.393  [2024-11-10T05:14:47.572Z] Copying: 321/1024 [MB] (321 MBps) [2024-11-10T05:14:48.514Z] Copying: 645/1024 [MB] (323 MBps) [2024-11-10T05:14:48.775Z] Copying: 969/1024 [MB] (323 MBps) [2024-11-10T05:14:49.036Z] Copying: 1024/1024 [MB] (average 323 MBps) 00:12:55.800 00:12:55.800 05:14:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:55.800 05:14:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:55.800 00:12:55.800 real 0m17.622s 00:12:55.800 user 0m14.431s 00:12:55.800 sys 0m2.649s 00:12:55.800 05:14:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.800 ************************************ 00:12:55.800 END TEST xnvme_to_malloc_dd_copy 00:12:55.800 ************************************ 00:12:55.800 05:14:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:55.800 05:14:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:55.800 05:14:48 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:55.800 05:14:48 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.800 05:14:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.800 ************************************ 00:12:55.800 START TEST xnvme_bdevperf 00:12:55.800 ************************************ 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:55.800 
05:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:55.800 05:14:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:55.800 { 00:12:55.800 "subsystems": [ 00:12:55.800 { 00:12:55.800 "subsystem": "bdev", 00:12:55.800 "config": [ 00:12:55.800 { 00:12:55.800 "params": { 00:12:55.800 "io_mechanism": "libaio", 00:12:55.800 "filename": "/dev/nullb0", 00:12:55.800 "name": "null0" 00:12:55.800 }, 00:12:55.800 "method": "bdev_xnvme_create" 00:12:55.800 }, 00:12:55.800 { 00:12:55.800 "method": "bdev_wait_for_examine" 00:12:55.800 } 00:12:55.800 ] 00:12:55.800 } 00:12:55.800 ] 00:12:55.800 } 00:12:55.800 [2024-11-10 05:14:48.987121] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:55.800 [2024-11-10 05:14:48.987207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80998 ] 00:12:56.061 [2024-11-10 05:14:49.126261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.061 [2024-11-10 05:14:49.160220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.061 Running I/O for 5 seconds... 00:12:58.020 210176.00 IOPS, 821.00 MiB/s [2024-11-10T05:14:52.645Z] 210496.00 IOPS, 822.25 MiB/s [2024-11-10T05:14:53.587Z] 210730.67 IOPS, 823.17 MiB/s [2024-11-10T05:14:54.530Z] 210896.00 IOPS, 823.81 MiB/s [2024-11-10T05:14:54.530Z] 211008.00 IOPS, 824.25 MiB/s 00:13:01.294 Latency(us) 00:13:01.294 [2024-11-10T05:14:54.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.294 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:01.294 null0 : 5.00 210940.77 823.99 0.00 0.00 301.39 296.17 1638.40 00:13:01.294 [2024-11-10T05:14:54.530Z] =================================================================================================================== 00:13:01.294 [2024-11-10T05:14:54.530Z] Total : 210940.77 823.99 0.00 0.00 301.39 296.17 1638.40 00:13:01.294 05:14:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:01.294 05:14:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:01.294 05:14:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:01.294 05:14:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:01.294 05:14:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:01.294 05:14:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:01.294 { 00:13:01.294 "subsystems": [ 00:13:01.294 { 00:13:01.294 "subsystem": "bdev", 00:13:01.294 "config": [ 00:13:01.294 { 00:13:01.294 "params": { 00:13:01.294 "io_mechanism": "io_uring", 00:13:01.294 "filename": "/dev/nullb0", 00:13:01.294 "name": "null0" 00:13:01.294 }, 00:13:01.294 "method": "bdev_xnvme_create" 00:13:01.294 }, 
00:13:01.294 { 00:13:01.295 "method": "bdev_wait_for_examine" 00:13:01.295 } 00:13:01.295 ] 00:13:01.295 } 00:13:01.295 ] 00:13:01.295 } 00:13:01.295 [2024-11-10 05:14:54.462960] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:01.295 [2024-11-10 05:14:54.463090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81071 ] 00:13:01.556 [2024-11-10 05:14:54.606736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.556 [2024-11-10 05:14:54.640666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.556 Running I/O for 5 seconds... 00:13:03.885 237376.00 IOPS, 927.25 MiB/s [2024-11-10T05:14:58.095Z] 237344.00 IOPS, 927.12 MiB/s [2024-11-10T05:14:59.042Z] 237397.33 IOPS, 927.33 MiB/s [2024-11-10T05:14:59.986Z] 237424.00 IOPS, 927.44 MiB/s [2024-11-10T05:14:59.986Z] 237427.20 IOPS, 927.45 MiB/s 00:13:06.750 Latency(us) 00:13:06.750 [2024-11-10T05:14:59.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.750 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:06.750 null0 : 5.00 237362.10 927.20 0.00 0.00 267.57 231.58 1953.48 00:13:06.750 [2024-11-10T05:14:59.986Z] =================================================================================================================== 00:13:06.750 [2024-11-10T05:14:59.986Z] Total : 237362.10 927.20 0.00 0.00 267.57 231.58 1953.48 00:13:06.750 05:14:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:06.750 05:14:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:06.750 00:13:06.750 real 0m10.951s 00:13:06.750 user 0m8.615s 00:13:06.750 sys 0m2.100s 00:13:06.750 05:14:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:06.750 ************************************ 00:13:06.750 END TEST xnvme_bdevperf 00:13:06.750 05:14:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:06.750 ************************************ 00:13:06.750 00:13:06.750 real 0m28.815s 00:13:06.750 user 0m23.158s 00:13:06.750 sys 0m4.867s 00:13:06.750 05:14:59 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:06.750 05:14:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:06.750 ************************************ 00:13:06.750 END TEST nvme_xnvme 00:13:06.750 ************************************ 00:13:06.750 05:14:59 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:06.750 05:14:59 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:06.750 05:14:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:06.750 05:14:59 -- common/autotest_common.sh@10 -- # set +x 00:13:06.750 ************************************ 00:13:06.750 START TEST blockdev_xnvme 00:13:06.750 ************************************ 00:13:06.750 05:14:59 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:07.012 * Looking for test storage... 
00:13:07.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:07.012 05:15:00 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:07.012 05:15:00 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:13:07.012 05:15:00 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:07.012 05:15:00 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:07.012 05:15:00 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.012 05:15:00 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.012 05:15:00 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.012 05:15:00 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.012 05:15:00 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.012 05:15:00 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.012 05:15:00 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.012 05:15:00 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.012 05:15:00 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.012 05:15:00 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.012 05:15:00 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.012 05:15:00 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:07.012 05:15:00 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:07.012 05:15:00 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.013 05:15:00 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:07.013 05:15:00 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:07.013 05:15:00 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:07.013 05:15:00 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.013 05:15:00 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:07.013 05:15:00 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.013 05:15:00 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:07.013 05:15:00 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:07.013 05:15:00 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.013 05:15:00 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:07.013 05:15:00 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.013 05:15:00 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.013 05:15:00 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.013 05:15:00 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:07.013 05:15:00 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.013 05:15:00 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:07.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.013 --rc genhtml_branch_coverage=1 00:13:07.013 --rc genhtml_function_coverage=1 00:13:07.013 --rc genhtml_legend=1 00:13:07.013 --rc geninfo_all_blocks=1 00:13:07.013 --rc geninfo_unexecuted_blocks=1 00:13:07.013 00:13:07.013 ' 00:13:07.013 05:15:00 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:07.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.013 --rc genhtml_branch_coverage=1 00:13:07.013 --rc genhtml_function_coverage=1 00:13:07.013 --rc genhtml_legend=1 
00:13:07.013 --rc geninfo_all_blocks=1 00:13:07.013 --rc geninfo_unexecuted_blocks=1 00:13:07.013 00:13:07.013 ' 00:13:07.013 05:15:00 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:07.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.013 --rc genhtml_branch_coverage=1 00:13:07.013 --rc genhtml_function_coverage=1 00:13:07.013 --rc genhtml_legend=1 00:13:07.013 --rc geninfo_all_blocks=1 00:13:07.013 --rc geninfo_unexecuted_blocks=1 00:13:07.013 00:13:07.013 ' 00:13:07.013 05:15:00 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:07.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.013 --rc genhtml_branch_coverage=1 00:13:07.013 --rc genhtml_function_coverage=1 00:13:07.013 --rc genhtml_legend=1 00:13:07.013 --rc geninfo_all_blocks=1 00:13:07.013 --rc geninfo_unexecuted_blocks=1 00:13:07.013 00:13:07.013 ' 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=81203 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:07.013 05:15:00 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 81203 00:13:07.013 05:15:00 blockdev_xnvme -- common/autotest_common.sh@831 -- # 
'[' -z 81203 ']' 00:13:07.013 05:15:00 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.013 05:15:00 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:07.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.013 05:15:00 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.013 05:15:00 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:07.013 05:15:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.013 [2024-11-10 05:15:00.217114] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:07.013 [2024-11-10 05:15:00.217339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81203 ] 00:13:07.274 [2024-11-10 05:15:00.383131] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.274 [2024-11-10 05:15:00.433465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.220 05:15:01 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:08.220 05:15:01 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:13:08.220 05:15:01 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:08.220 05:15:01 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:08.220 05:15:01 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:08.220 05:15:01 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:08.220 05:15:01 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:08.220 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:08.480 Waiting for block devices as requested 00:13:08.480 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:08.480 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:08.741 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:08.741 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:14.037 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 
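(Editor's aside: the get_zoned_devs trace running here reduces to one sysfs probe per namespace — a device is zoned when /sys/block/<name>/queue/zoned reports something other than "none", and zoned namespaces are excluded before xnvme claims the rest. A minimal standalone sketch of that check follows; the helper name mirrors the trace, but the body is an illustration, not the autotest_common.sh implementation.)

    # Return success when a block device is zoned; "none" (or a missing
    # sysfs attribute) means a conventional device that xnvme can claim.
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }

    # Collect the zoned namespaces so they can be skipped later.
    declare -A zoned_devs
    for nvme in /sys/block/nvme*; do
        dev=$(basename "$nvme")
        is_block_zoned "$dev" && zoned_devs[$dev]=1
    done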
00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:14.037 05:15:06 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.037 nvme0n1 00:13:14.037 nvme1n1 00:13:14.037 nvme2n1 00:13:14.037 nvme2n2 00:13:14.037 nvme2n3 00:13:14.037 nvme3n1 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.037 05:15:06 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:14.037 05:15:06 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.037 05:15:06 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:13:14.037 05:15:07 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.037 05:15:07 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:14.037 05:15:07 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.038 05:15:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.038 05:15:07 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.038 05:15:07 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:14.038 05:15:07 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:14.038 05:15:07 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.038 05:15:07 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:14.038 05:15:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.038 05:15:07 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.038 05:15:07 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:14.038 05:15:07 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:14.038 05:15:07 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "422c0da4-a41c-40e9-9d22-d79e49de120e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "422c0da4-a41c-40e9-9d22-d79e49de120e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "883361dc-2c0e-4914-b052-37d20bfa10cf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "883361dc-2c0e-4914-b052-37d20bfa10cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "889e753f-d3fb-4b2d-bb86-ec6040e5a14e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "889e753f-d3fb-4b2d-bb86-ec6040e5a14e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "70da86ff-19d4-45a1-b407-a67c48a883e0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "70da86ff-19d4-45a1-b407-a67c48a883e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "c50d8aea-7950-4bbd-8673-5ea0dcc19767"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c50d8aea-7950-4bbd-8673-5ea0dcc19767",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "d5882ef1-ca60-43a1-a630-f7f1127f24a0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d5882ef1-ca60-43a1-a630-f7f1127f24a0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:14.038 05:15:07 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:14.038 05:15:07 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:14.038 05:15:07 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:14.038 05:15:07 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 81203 00:13:14.038 05:15:07 
blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 81203 ']' 00:13:14.038 05:15:07 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 81203 00:13:14.038 05:15:07 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:13:14.038 05:15:07 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.038 05:15:07 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81203 00:13:14.038 05:15:07 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:14.038 05:15:07 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:14.038 killing process with pid 81203 00:13:14.038 05:15:07 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81203' 00:13:14.038 05:15:07 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 81203 00:13:14.038 05:15:07 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 81203 00:13:14.299 05:15:07 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:14.299 05:15:07 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:14.299 05:15:07 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:14.299 05:15:07 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:14.299 05:15:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.299 ************************************ 00:13:14.299 START TEST bdev_hello_world 00:13:14.299 ************************************ 00:13:14.299 05:15:07 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:14.299 [2024-11-10 05:15:07.406117] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:14.299 [2024-11-10 05:15:07.406235] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81550 ] 00:13:14.560 [2024-11-10 05:15:07.550110] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.560 [2024-11-10 05:15:07.590869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.560 [2024-11-10 05:15:07.750069] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:14.560 [2024-11-10 05:15:07.750108] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:14.560 [2024-11-10 05:15:07.750122] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:14.560 [2024-11-10 05:15:07.751592] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:14.560 [2024-11-10 05:15:07.751931] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:14.560 [2024-11-10 05:15:07.751958] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:14.560 [2024-11-10 05:15:07.752147] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
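(Editor's aside: the hello_world pass above is the stock SPDK example doing a fixed write-then-read round trip — start the app from the generated JSON config, open the named bdev and an I/O channel, write "Hello World!", read it back, and stop. The invocation can be reproduced by hand; paths and flags are copied from the log, and the example needs root to map hugepages.)

    # Re-run the example against the first xnvme bdev from the test config.
    # A successful run ends with "Read string from bdev : Hello World!".
    sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1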
00:13:14.560 00:13:14.560 [2024-11-10 05:15:07.752200] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:14.822 00:13:14.822 real 0m0.534s 00:13:14.822 user 0m0.292s 00:13:14.822 sys 0m0.134s 00:13:14.822 05:15:07 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:14.822 ************************************ 00:13:14.822 END TEST bdev_hello_world 00:13:14.822 ************************************ 00:13:14.822 05:15:07 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:14.822 05:15:07 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:14.822 05:15:07 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:14.822 05:15:07 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:14.822 05:15:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.822 ************************************ 00:13:14.822 START TEST bdev_bounds 00:13:14.822 ************************************ 00:13:14.822 05:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:13:14.822 Process bdevio pid: 81576 00:13:14.822 05:15:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81576 00:13:14.822 05:15:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:14.822 05:15:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81576' 00:13:14.822 05:15:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81576 00:13:14.822 05:15:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:14.822 05:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 81576 ']' 00:13:14.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.822 05:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.822 05:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:14.822 05:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.822 05:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:14.822 05:15:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:14.822 [2024-11-10 05:15:07.999166] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:14.822 [2024-11-10 05:15:07.999304] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81576 ] 00:13:15.083 [2024-11-10 05:15:08.147176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:15.083 [2024-11-10 05:15:08.189061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.083 [2024-11-10 05:15:08.189106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.083 [2024-11-10 05:15:08.189150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.656 05:15:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:15.656 05:15:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:15.656 05:15:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:15.918 I/O targets: 00:13:15.918 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:15.918 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:15.918 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:15.918 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:15.918 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:15.918 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:15.918 00:13:15.918 00:13:15.918 CUnit - A unit testing framework for C - Version 2.1-3 00:13:15.918 http://cunit.sourceforge.net/ 00:13:15.918 00:13:15.918 00:13:15.918 Suite: bdevio tests on: nvme3n1 00:13:15.918 Test: blockdev write read block ...passed 00:13:15.918 Test: blockdev write zeroes read block ...passed 00:13:15.918 Test: blockdev write zeroes read no split ...passed 00:13:15.918 Test: blockdev write zeroes read split ...passed 00:13:15.918 Test: blockdev write zeroes read split partial ...passed 00:13:15.918 Test: blockdev reset ...passed 00:13:15.918 Test: blockdev write read 8 blocks ...passed 00:13:15.918 Test: blockdev write read size > 128k ...passed 00:13:15.918 Test: blockdev write read invalid size ...passed 00:13:15.918 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:15.918 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:15.918 Test: blockdev write read max offset ...passed 00:13:15.918 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:15.918 Test: blockdev writev readv 8 blocks ...passed 00:13:15.918 Test: blockdev writev readv 30 x 1block ...passed 00:13:15.918 Test: blockdev writev readv block ...passed 00:13:15.918 Test: blockdev writev readv size > 128k ...passed 00:13:15.918 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:15.918 Test: blockdev comparev and writev ...passed 00:13:15.918 Test: blockdev nvme passthru rw ...passed 00:13:15.918 Test: blockdev nvme passthru vendor specific ...passed 00:13:15.918 Test: blockdev nvme admin passthru ...passed 00:13:15.918 Test: blockdev copy ...passed 00:13:15.918 Suite: bdevio tests on: nvme2n3 00:13:15.918 Test: blockdev write read block ...passed 00:13:15.918 Test: blockdev write zeroes read block ...passed 00:13:15.918 Test: blockdev write zeroes read no split ...passed 00:13:15.918 Test: blockdev write zeroes read split ...passed 00:13:15.918 Test: blockdev write zeroes read split partial ...passed 00:13:15.918 Test: blockdev reset ...passed 
00:13:15.918 Test: blockdev write read 8 blocks ...passed 00:13:15.918 Test: blockdev write read size > 128k ...passed 00:13:15.918 Test: blockdev write read invalid size ...passed 00:13:15.918 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:15.918 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:15.918 Test: blockdev write read max offset ...passed 00:13:15.918 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:15.918 Test: blockdev writev readv 8 blocks ...passed 00:13:15.918 Test: blockdev writev readv 30 x 1block ...passed 00:13:15.918 Test: blockdev writev readv block ...passed 00:13:15.918 Test: blockdev writev readv size > 128k ...passed 00:13:15.918 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:15.918 Test: blockdev comparev and writev ...passed 00:13:15.918 Test: blockdev nvme passthru rw ...passed 00:13:15.918 Test: blockdev nvme passthru vendor specific ...passed 00:13:15.918 Test: blockdev nvme admin passthru ...passed 00:13:15.918 Test: blockdev copy ...passed 00:13:15.918 Suite: bdevio tests on: nvme2n2 00:13:15.918 Test: blockdev write read block ...passed 00:13:15.918 Test: blockdev write zeroes read block ...passed 00:13:15.918 Test: blockdev write zeroes read no split ...passed 00:13:15.918 Test: blockdev write zeroes read split ...passed 00:13:15.918 Test: blockdev write zeroes read split partial ...passed 00:13:15.918 Test: blockdev reset ...passed 00:13:15.918 Test: blockdev write read 8 blocks ...passed 00:13:15.918 Test: blockdev write read size > 128k ...passed 00:13:15.918 Test: blockdev write read invalid size ...passed 00:13:15.918 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:15.918 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:15.918 Test: blockdev write read max offset ...passed 00:13:15.918 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:15.918 Test: blockdev writev readv 8 blocks ...passed 00:13:15.918 Test: blockdev writev readv 30 x 1block ...passed 00:13:15.918 Test: blockdev writev readv block ...passed 00:13:15.918 Test: blockdev writev readv size > 128k ...passed 00:13:15.918 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:15.918 Test: blockdev comparev and writev ...passed 00:13:15.918 Test: blockdev nvme passthru rw ...passed 00:13:15.918 Test: blockdev nvme passthru vendor specific ...passed 00:13:15.918 Test: blockdev nvme admin passthru ...passed 00:13:15.918 Test: blockdev copy ...passed 00:13:15.918 Suite: bdevio tests on: nvme2n1 00:13:15.918 Test: blockdev write read block ...passed 00:13:15.918 Test: blockdev write zeroes read block ...passed 00:13:15.918 Test: blockdev write zeroes read no split ...passed 00:13:15.918 Test: blockdev write zeroes read split ...passed 00:13:15.918 Test: blockdev write zeroes read split partial ...passed 00:13:15.918 Test: blockdev reset ...passed 00:13:15.918 Test: blockdev write read 8 blocks ...passed 00:13:15.918 Test: blockdev write read size > 128k ...passed 00:13:15.918 Test: blockdev write read invalid size ...passed 00:13:15.918 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:15.918 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:15.918 Test: blockdev write read max offset ...passed 00:13:15.918 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:15.918 Test: blockdev writev readv 8 blocks 
...passed 00:13:15.918 Test: blockdev writev readv 30 x 1block ...passed 00:13:15.918 Test: blockdev writev readv block ...passed 00:13:15.918 Test: blockdev writev readv size > 128k ...passed 00:13:15.918 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:15.918 Test: blockdev comparev and writev ...passed 00:13:15.918 Test: blockdev nvme passthru rw ...passed 00:13:15.918 Test: blockdev nvme passthru vendor specific ...passed 00:13:15.918 Test: blockdev nvme admin passthru ...passed 00:13:15.918 Test: blockdev copy ...passed 00:13:15.918 Suite: bdevio tests on: nvme1n1 00:13:15.918 Test: blockdev write read block ...passed 00:13:15.918 Test: blockdev write zeroes read block ...passed 00:13:15.918 Test: blockdev write zeroes read no split ...passed 00:13:15.918 Test: blockdev write zeroes read split ...passed 00:13:15.918 Test: blockdev write zeroes read split partial ...passed 00:13:15.918 Test: blockdev reset ...passed 00:13:15.919 Test: blockdev write read 8 blocks ...passed 00:13:15.919 Test: blockdev write read size > 128k ...passed 00:13:15.919 Test: blockdev write read invalid size ...passed 00:13:15.919 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:15.919 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:15.919 Test: blockdev write read max offset ...passed 00:13:15.919 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:15.919 Test: blockdev writev readv 8 blocks ...passed 00:13:15.919 Test: blockdev writev readv 30 x 1block ...passed 00:13:15.919 Test: blockdev writev readv block ...passed 00:13:15.919 Test: blockdev writev readv size > 128k ...passed 00:13:15.919 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:15.919 Test: blockdev comparev and writev ...passed 00:13:15.919 Test: blockdev nvme passthru rw ...passed 00:13:15.919 Test: blockdev nvme passthru vendor specific ...passed 00:13:15.919 Test: blockdev nvme admin passthru ...passed 00:13:15.919 Test: blockdev copy ...passed 00:13:15.919 Suite: bdevio tests on: nvme0n1 00:13:15.919 Test: blockdev write read block ...passed 00:13:15.919 Test: blockdev write zeroes read block ...passed 00:13:15.919 Test: blockdev write zeroes read no split ...passed 00:13:15.919 Test: blockdev write zeroes read split ...passed 00:13:15.919 Test: blockdev write zeroes read split partial ...passed 00:13:15.919 Test: blockdev reset ...passed 00:13:15.919 Test: blockdev write read 8 blocks ...passed 00:13:15.919 Test: blockdev write read size > 128k ...passed 00:13:15.919 Test: blockdev write read invalid size ...passed 00:13:15.919 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:15.919 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:15.919 Test: blockdev write read max offset ...passed 00:13:15.919 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:15.919 Test: blockdev writev readv 8 blocks ...passed 00:13:15.919 Test: blockdev writev readv 30 x 1block ...passed 00:13:15.919 Test: blockdev writev readv block ...passed 00:13:15.919 Test: blockdev writev readv size > 128k ...passed 00:13:15.919 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:15.919 Test: blockdev comparev and writev ...passed 00:13:15.919 Test: blockdev nvme passthru rw ...passed 00:13:15.919 Test: blockdev nvme passthru vendor specific ...passed 00:13:15.919 Test: blockdev nvme admin passthru ...passed 00:13:15.919 Test: blockdev copy ...passed 
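(Editor's aside: all six suites above come from a single bdevio process. blockdev.sh starts bdevio against the same JSON config, waits for its RPC socket, and then drives it with tests.py, so every registered bdev gets the identical battery — 138 tests across 6 suites, i.e. 23 per bdev, per the summary below. Condensed, the two-step launch looks roughly like this; paths and flags are as in the log, and the sleep is an illustrative stand-in for the waitforlisten handshake.)

    # Start the bdevio server (-w: wait for RPC before running, -s 0: no
    # pre-reserved memory) against the generated bdev config.
    sudo /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    sleep 1   # stand-in for waitforlisten on /var/tmp/spdk.sock

    # Ask the server to run the CUnit battery on every unclaimed bdev,
    # then tear it down.
    sudo /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    sudo kill "$bdevio_pid"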
00:13:15.919 00:13:15.919 Run Summary: Type Total Ran Passed Failed Inactive 00:13:15.919 suites 6 6 n/a 0 0 00:13:15.919 tests 138 138 138 0 0 00:13:15.919 asserts 780 780 780 0 n/a 00:13:15.919 00:13:15.919 Elapsed time = 0.276 seconds 00:13:15.919 0 00:13:15.919 05:15:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81576 00:13:15.919 05:15:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 81576 ']' 00:13:15.919 05:15:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 81576 00:13:15.919 05:15:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:13:15.919 05:15:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:15.919 05:15:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81576 00:13:15.919 05:15:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:15.919 05:15:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:15.919 05:15:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81576' 00:13:15.919 killing process with pid 81576 00:13:15.919 05:15:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 81576 00:13:15.919 05:15:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 81576 00:13:16.181 05:15:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:16.181 00:13:16.181 real 0m1.299s 00:13:16.181 user 0m3.289s 00:13:16.181 sys 0m0.249s 00:13:16.181 05:15:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.181 05:15:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:16.181 ************************************ 00:13:16.181 END TEST bdev_bounds 00:13:16.181 ************************************ 00:13:16.181 05:15:09 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:16.181 05:15:09 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:16.181 05:15:09 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.181 05:15:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:16.181 ************************************ 00:13:16.181 START TEST bdev_nbd 00:13:16.181 ************************************ 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
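(Editor's aside: the bdev_nbd test that starts here round-trips every bdev through the kernel NBD driver — bdev_svc comes up with the same config, each bdev is exported as /dev/nbdN via the nbd_start_disk RPC, one 4 KiB block is copied off with dd to prove the mapping answers I/O, and the disk is detached again. Condensed to a single device, the loop the following traces exercise looks roughly like this; rpc.py and the socket path are as in the log, error handling is omitted.)

    RPC="sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # Export the bdev on the first free NBD node; the RPC prints the
    # chosen device. Wait for the kernel to publish it in /proc/partitions.
    nbd=$($RPC nbd_start_disk nvme0n1)
    until grep -q -w "$(basename "$nbd")" /proc/partitions; do sleep 0.1; done

    # One direct-I/O read proves the bdev-to-NBD mapping end to end.
    sudo dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct

    # Detach and wait until the node disappears again.
    $RPC nbd_stop_disk "$nbd"
    while grep -q -w "$(basename "$nbd")" /proc/partitions; do sleep 0.1; done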
00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81620 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81620 /var/tmp/spdk-nbd.sock 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 81620 ']' 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:16.181 05:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:16.181 [2024-11-10 05:15:09.366217] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:16.181 [2024-11-10 05:15:09.366326] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.442 [2024-11-10 05:15:09.512360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.442 [2024-11-10 05:15:09.541341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.013 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.013 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:17.013 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:17.013 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:17.013 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:17.013 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:17.013 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:17.013 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:17.013 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:17.013 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:17.013 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:17.013 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:17.013 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:17.013 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:17.013 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.275 
1+0 records in 00:13:17.275 1+0 records out 00:13:17.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000870607 s, 4.7 MB/s 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:17.275 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.538 1+0 records in 00:13:17.538 1+0 records out 00:13:17.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128619 s, 3.2 MB/s 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:17.538 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:17.800 05:15:10 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.800 1+0 records in 00:13:17.800 1+0 records out 00:13:17.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103218 s, 4.0 MB/s 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:17.800 05:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:18.062 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:18.062 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:18.062 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:18.062 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:18.062 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:18.062 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.062 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.062 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:18.062 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:18.062 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.062 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.062 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.062 1+0 records in 00:13:18.062 1+0 records out 00:13:18.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112882 s, 3.6 MB/s 00:13:18.062 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.062 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:18.062 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.062 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.063 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:18.063 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:18.063 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:18.063 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.324 1+0 records in 00:13:18.324 1+0 records out 00:13:18.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119357 s, 3.4 MB/s 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:18.324 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:18.585 05:15:11 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.585 1+0 records in 00:13:18.585 1+0 records out 00:13:18.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103001 s, 4.0 MB/s 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:18.585 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:18.586 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:18.586 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:18.847 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:18.847 { 00:13:18.847 "nbd_device": "/dev/nbd0", 00:13:18.847 "bdev_name": "nvme0n1" 00:13:18.847 }, 00:13:18.847 { 00:13:18.847 "nbd_device": "/dev/nbd1", 00:13:18.847 "bdev_name": "nvme1n1" 00:13:18.847 }, 00:13:18.847 { 00:13:18.847 "nbd_device": "/dev/nbd2", 00:13:18.847 "bdev_name": "nvme2n1" 00:13:18.847 }, 00:13:18.847 { 00:13:18.847 "nbd_device": "/dev/nbd3", 00:13:18.847 "bdev_name": "nvme2n2" 00:13:18.847 }, 00:13:18.847 { 00:13:18.847 "nbd_device": "/dev/nbd4", 00:13:18.847 "bdev_name": "nvme2n3" 00:13:18.847 }, 00:13:18.847 { 00:13:18.847 "nbd_device": "/dev/nbd5", 00:13:18.847 "bdev_name": "nvme3n1" 00:13:18.847 } 00:13:18.847 ]' 00:13:18.847 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:18.847 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:18.847 { 00:13:18.847 "nbd_device": "/dev/nbd0", 00:13:18.847 "bdev_name": "nvme0n1" 00:13:18.847 }, 00:13:18.847 { 00:13:18.847 "nbd_device": "/dev/nbd1", 00:13:18.847 "bdev_name": "nvme1n1" 00:13:18.847 }, 00:13:18.847 { 00:13:18.847 "nbd_device": "/dev/nbd2", 00:13:18.847 "bdev_name": "nvme2n1" 00:13:18.847 }, 00:13:18.847 { 00:13:18.847 "nbd_device": "/dev/nbd3", 00:13:18.847 "bdev_name": "nvme2n2" 00:13:18.847 }, 00:13:18.847 { 00:13:18.847 "nbd_device": "/dev/nbd4", 00:13:18.847 "bdev_name": "nvme2n3" 00:13:18.847 }, 00:13:18.847 { 00:13:18.847 "nbd_device": "/dev/nbd5", 00:13:18.847 "bdev_name": "nvme3n1" 00:13:18.847 } 00:13:18.847 ]' 00:13:18.847 05:15:11 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:18.847 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:18.847 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:18.847 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:18.847 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.847 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:18.847 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.847 05:15:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:19.109 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:19.109 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:19.109 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:19.109 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.109 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.109 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:19.109 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:19.109 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.109 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.109 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:19.370 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:19.370 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:19.370 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:19.370 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.370 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.370 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:19.370 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:19.370 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.370 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.370 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.632 05:15:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:19.893 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:19.893 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:19.893 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:19.893 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.893 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.893 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:19.893 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:19.893 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.893 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.893 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:20.153 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:20.153 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:20.153 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:20.153 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.153 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.153 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:20.153 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:20.153 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.153 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:20.153 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:20.153 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:20.413 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:20.674 /dev/nbd0 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.674 1+0 records in 00:13:20.674 1+0 records out 00:13:20.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.001221 s, 3.4 MB/s 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:20.674 05:15:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:20.935 /dev/nbd1 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.935 1+0 records in 00:13:20.935 1+0 records out 00:13:20.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00147025 s, 2.8 MB/s 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:20.935 05:15:14 
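Two invocation forms of nbd_start_disk appear in this log: with an explicit /dev/nbdX target, as in this pass, and without one, as in the earlier pass, where the RPC picks a free device and prints the assignment on stdout. Both forms as used in this trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # explicit target: attach bdev nvme1n1 to /dev/nbd1
    $rpc nbd_start_disk nvme1n1 /dev/nbd1
    # no target: capture the device the RPC assigns (e.g. /dev/nbd4 above)
    nbd_device=$($rpc nbd_start_disk nvme2n3)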
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:20.935 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:21.201 /dev/nbd10 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.201 1+0 records in 00:13:21.201 1+0 records out 00:13:21.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102617 s, 4.0 MB/s 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:21.201 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:21.497 /dev/nbd11 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:21.497 05:15:14 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.497 1+0 records in 00:13:21.497 1+0 records out 00:13:21.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110979 s, 3.7 MB/s 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:21.497 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:21.498 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:21.759 /dev/nbd12 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.759 1+0 records in 00:13:21.759 1+0 records out 00:13:21.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133655 s, 3.1 MB/s 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:21.759 05:15:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:21.759 /dev/nbd13 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.021 1+0 records in 00:13:22.021 1+0 records out 00:13:22.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126127 s, 3.2 MB/s 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:22.021 { 00:13:22.021 "nbd_device": "/dev/nbd0", 00:13:22.021 "bdev_name": "nvme0n1" 00:13:22.021 }, 00:13:22.021 { 00:13:22.021 "nbd_device": "/dev/nbd1", 00:13:22.021 "bdev_name": "nvme1n1" 00:13:22.021 }, 00:13:22.021 { 00:13:22.021 "nbd_device": "/dev/nbd10", 00:13:22.021 "bdev_name": "nvme2n1" 00:13:22.021 }, 00:13:22.021 { 00:13:22.021 "nbd_device": "/dev/nbd11", 00:13:22.021 "bdev_name": "nvme2n2" 00:13:22.021 }, 00:13:22.021 { 00:13:22.021 "nbd_device": "/dev/nbd12", 00:13:22.021 "bdev_name": "nvme2n3" 00:13:22.021 }, 00:13:22.021 { 00:13:22.021 "nbd_device": "/dev/nbd13", 00:13:22.021 "bdev_name": "nvme3n1" 00:13:22.021 } 00:13:22.021 ]' 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:22.021 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:22.021 { 00:13:22.021 "nbd_device": "/dev/nbd0", 00:13:22.021 "bdev_name": "nvme0n1" 00:13:22.021 }, 00:13:22.021 { 00:13:22.021 "nbd_device": "/dev/nbd1", 00:13:22.021 "bdev_name": "nvme1n1" 00:13:22.021 }, 00:13:22.021 { 00:13:22.021 "nbd_device": "/dev/nbd10", 00:13:22.021 "bdev_name": "nvme2n1" 00:13:22.021 }, 00:13:22.021 { 00:13:22.021 "nbd_device": "/dev/nbd11", 00:13:22.021 "bdev_name": "nvme2n2" 00:13:22.021 }, 00:13:22.021 { 00:13:22.021 "nbd_device": "/dev/nbd12", 00:13:22.021 "bdev_name": "nvme2n3" 00:13:22.021 }, 00:13:22.021 { 00:13:22.021 "nbd_device": "/dev/nbd13", 00:13:22.021 "bdev_name": "nvme3n1" 00:13:22.021 } 00:13:22.021 ]' 00:13:22.283 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:22.283 /dev/nbd1 00:13:22.283 /dev/nbd10 00:13:22.283 /dev/nbd11 00:13:22.283 /dev/nbd12 00:13:22.283 /dev/nbd13' 00:13:22.283 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:22.283 /dev/nbd1 00:13:22.283 /dev/nbd10 00:13:22.283 /dev/nbd11 00:13:22.283 /dev/nbd12 00:13:22.283 /dev/nbd13' 00:13:22.283 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:22.283 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:22.283 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:22.283 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:22.283 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:22.283 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:22.283 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:22.283 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:22.283 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:22.283 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:22.283 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:22.283 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:22.283 256+0 records in 00:13:22.283 256+0 records out 00:13:22.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112545 s, 93.2 MB/s 00:13:22.283 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:22.283 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:22.545 256+0 records in 00:13:22.545 256+0 records out 00:13:22.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238178 s, 4.4 MB/s 00:13:22.545 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:22.545 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:22.804 256+0 records in 00:13:22.804 256+0 records out 00:13:22.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.305733 s, 
3.4 MB/s 00:13:22.804 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:22.804 05:15:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:23.062 256+0 records in 00:13:23.062 256+0 records out 00:13:23.062 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238718 s, 4.4 MB/s 00:13:23.062 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:23.062 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:23.320 256+0 records in 00:13:23.320 256+0 records out 00:13:23.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.219383 s, 4.8 MB/s 00:13:23.320 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:23.320 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:23.320 256+0 records in 00:13:23.320 256+0 records out 00:13:23.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.212892 s, 4.9 MB/s 00:13:23.320 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:23.320 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:23.579 256+0 records in 00:13:23.579 256+0 records out 00:13:23.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.220408 s, 4.8 MB/s 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:23.579 
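This write/verify pass round-trips one 1 MiB buffer of random data through every attached device: write it out with O_DIRECT, then cmp the device contents byte-for-byte against the source file (the cmp calls for nbd12 and nbd13 continue below). Condensed from the trace:

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256           # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct  # write pass
    done
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        cmp -b -n 1M $tmp $nbd                             # read-back must match exactly
    done
    rm $tmp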
05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.579 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:23.838 05:15:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:23.838 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:23.838 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:23.839 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.839 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.839 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:23.839 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:23.839 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.839 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.839 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:24.096 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:24.096 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:24.096 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:24.096 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.096 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.096 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:24.096 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:24.096 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.096 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.096 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:13:24.354 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:24.354 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:24.354 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:24.354 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.354 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.354 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:24.354 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:24.354 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.354 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.355 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.616 05:15:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:24.876 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:24.876 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:24.876 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:24.876 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.876 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.876 
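waitfornbd_exit is the inverse of the earlier waitfornbd: after nbd_stop_disk it polls /proc/partitions until the device name is gone. A sketch reconstructed from the trace; the sleep is an assumption, as every stop here succeeds on the first check:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                break  # kernel no longer lists the device
            fi
            sleep 0.1  # assumption: retry delay not visible in the trace
        done
        return 0
    }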
05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:24.876 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:24.876 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.876 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:24.876 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:24.876 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:25.134 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:25.134 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:25.134 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:25.134 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:25.134 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:25.134 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:25.134 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:25.134 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:25.134 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:25.134 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:25.134 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:25.134 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:25.134 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:25.134 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.134 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:25.134 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:25.394 malloc_lvol_verify 00:13:25.394 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:25.655 bdbc104e-babe-4bf3-9f62-4883dd72bb97 00:13:25.655 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:25.655 b9165375-0807-4394-b3b1-b26e7ace6380 00:13:25.655 05:15:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:25.916 /dev/nbd0 00:13:25.916 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:25.916 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:25.916 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:25.916 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:25.916 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:25.916 mke2fs 1.47.0 (5-Feb-2023) 00:13:25.916 Discarding device blocks: 0/4096 
done 00:13:25.916 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:25.916 00:13:25.916 Allocating group tables: 0/1 done 00:13:25.916 Writing inode tables: 0/1 done 00:13:25.916 Creating journal (1024 blocks): done 00:13:25.916 Writing superblocks and filesystem accounting information: 0/1 done 00:13:25.916 00:13:25.916 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:25.916 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.916 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:25.916 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.916 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:25.916 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.916 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81620 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 81620 ']' 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 81620 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81620 00:13:26.177 killing process with pid 81620 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81620' 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 81620 00:13:26.177 05:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 81620 00:13:26.439 ************************************ 00:13:26.439 END TEST bdev_nbd 00:13:26.439 ************************************ 00:13:26.439 05:15:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:26.439 00:13:26.439 real 0m10.173s 00:13:26.439 user 0m13.834s 00:13:26.439 sys 0m3.687s 00:13:26.439 05:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.439 05:15:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
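The nbd test closed by proving that a logical volume exported over nbd takes a filesystem: a 16 MB malloc bdev with 512-byte blocks backs an lvstore, a 4 MB lvol carved from it is attached to /dev/nbd0, and mkfs.ext4 completes on it (4096 1k blocks, matching the lvol size). The RPC sequence, condensed from the trace above:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MB bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs  # lvstore on the malloc bdev
    $rpc bdev_lvol_create lvol 4 -l lvs                   # 4 MB logical volume
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                # export the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                   # must succeed end to end
    $rpc nbd_stop_disk /dev/nbd0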
00:13:26.439 05:15:19 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:26.439 05:15:19 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:26.440 05:15:19 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:26.440 05:15:19 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:26.440 05:15:19 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:26.440 05:15:19 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.440 05:15:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:26.440 ************************************ 00:13:26.440 START TEST bdev_fio 00:13:26.440 ************************************ 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:26.440 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:26.440 ************************************ 00:13:26.440 START TEST bdev_fio_rw_verify 00:13:26.440 ************************************ 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:26.440 05:15:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:26.701 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:26.701 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:26.701 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:26.701 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:26.701 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:26.701 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:26.701 fio-3.35 00:13:26.701 Starting 6 threads 00:13:38.959 00:13:38.959 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=82021: Sun Nov 10 05:15:30 2024 00:13:38.959 read: IOPS=19.1k, BW=74.5MiB/s (78.1MB/s)(745MiB/10002msec) 00:13:38.959 slat (usec): min=2, max=2188, avg= 5.42, stdev=14.53 00:13:38.959 clat (usec): min=77, max=8962, avg=1028.43, stdev=777.17 00:13:38.959 lat (usec): min=80, max=8977, avg=1033.85, stdev=777.71 
00:13:38.959 clat percentiles (usec): 00:13:38.959 | 50.000th=[ 791], 99.000th=[ 3589], 99.900th=[ 4948], 99.990th=[ 6783], 00:13:38.959 | 99.999th=[ 8979] 00:13:38.959 write: IOPS=19.4k, BW=75.8MiB/s (79.5MB/s)(758MiB/10002msec); 0 zone resets 00:13:38.959 slat (usec): min=4, max=6028, avg=32.94, stdev=122.00 00:13:38.959 clat (usec): min=73, max=8482, avg=1185.08, stdev=882.87 00:13:38.959 lat (usec): min=92, max=8546, avg=1218.02, stdev=898.11 00:13:38.959 clat percentiles (usec): 00:13:38.960 | 50.000th=[ 906], 99.000th=[ 4047], 99.900th=[ 5407], 99.990th=[ 6718], 00:13:38.960 | 99.999th=[ 8291] 00:13:38.960 bw ( KiB/s): min=47674, max=165880, per=100.00%, avg=79109.00, stdev=5896.78, samples=114 00:13:38.960 iops : min=11916, max=41468, avg=19776.32, stdev=1474.21, samples=114 00:13:38.960 lat (usec) : 100=0.03%, 250=6.29%, 500=19.88%, 750=18.46%, 1000=12.41% 00:13:38.960 lat (msec) : 2=28.73%, 4=13.38%, 10=0.82% 00:13:38.960 cpu : usr=44.07%, sys=31.89%, ctx=6372, majf=0, minf=17943 00:13:38.960 IO depths : 1=11.7%, 2=24.2%, 4=50.8%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:38.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.960 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.960 issued rwts: total=190663,194136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.960 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:38.960 00:13:38.960 Run status group 0 (all jobs): 00:13:38.960 READ: bw=74.5MiB/s (78.1MB/s), 74.5MiB/s-74.5MiB/s (78.1MB/s-78.1MB/s), io=745MiB (781MB), run=10002-10002msec 00:13:38.960 WRITE: bw=75.8MiB/s (79.5MB/s), 75.8MiB/s-75.8MiB/s (79.5MB/s-79.5MB/s), io=758MiB (795MB), run=10002-10002msec 00:13:38.960 ----------------------------------------------------- 00:13:38.960 Suppressions used: 00:13:38.960 count bytes template 00:13:38.960 6 48 /usr/src/fio/parse.c 00:13:38.960 3339 320544 /usr/src/fio/iolog.c 00:13:38.960 1 8 libtcmalloc_minimal.so 00:13:38.960 1 904 libcrypto.so 00:13:38.960 ----------------------------------------------------- 00:13:38.960 00:13:38.960 00:13:38.960 real 0m11.095s 00:13:38.960 user 0m27.112s 00:13:38.960 sys 0m19.424s 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:38.960 ************************************ 00:13:38.960 END TEST bdev_fio_rw_verify 00:13:38.960 ************************************ 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "422c0da4-a41c-40e9-9d22-d79e49de120e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "422c0da4-a41c-40e9-9d22-d79e49de120e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "883361dc-2c0e-4914-b052-37d20bfa10cf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "883361dc-2c0e-4914-b052-37d20bfa10cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "889e753f-d3fb-4b2d-bb86-ec6040e5a14e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "889e753f-d3fb-4b2d-bb86-ec6040e5a14e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "70da86ff-19d4-45a1-b407-a67c48a883e0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "70da86ff-19d4-45a1-b407-a67c48a883e0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "c50d8aea-7950-4bbd-8673-5ea0dcc19767"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c50d8aea-7950-4bbd-8673-5ea0dcc19767",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "d5882ef1-ca60-43a1-a630-f7f1127f24a0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d5882ef1-ca60-43a1-a630-f7f1127f24a0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:38.960 /home/vagrant/spdk_repo/spdk 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:38.960 00:13:38.960 real 0m11.261s 00:13:38.960 user 
0m27.186s 00:13:38.960 sys 0m19.496s 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.960 05:15:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:38.960 ************************************ 00:13:38.960 END TEST bdev_fio 00:13:38.960 ************************************ 00:13:38.960 05:15:30 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:38.960 05:15:30 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:38.960 05:15:30 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:38.960 05:15:30 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.960 05:15:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:38.960 ************************************ 00:13:38.960 START TEST bdev_verify 00:13:38.960 ************************************ 00:13:38.960 05:15:30 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:38.960 [2024-11-10 05:15:30.921194] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:38.960 [2024-11-10 05:15:30.921339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82184 ] 00:13:38.960 [2024-11-10 05:15:31.070972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:38.960 [2024-11-10 05:15:31.126046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.961 [2024-11-10 05:15:31.126122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.961 Running I/O for 5 seconds... 
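The bdev_verify stage drives bdevperf in verify mode against the six xNVMe bdevs described by bdev.json. As a rough sketch of how such a run can be reproduced by hand (the bdev name, device path, and io_mechanism below are illustrative assumptions, not values taken from this log):

    # minimal single-bdev config; the bdev_xnvme_create parameters shown are assumed for illustration
    cat > bdev.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
        { "method": "bdev_xnvme_create",
          "params": { "name": "nvme0n1", "filename": "/dev/nvme0n1", "io_mechanism": "io_uring" } }
    ] } ] }
    EOF
    # same switches the harness uses: queue depth 128, 4 KiB IOs, verify workload, 5 s, cores 0-1
    ./build/examples/bdevperf --json bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The table that follows reports one row per job per core mask (0x1 and 0x2) because -m 0x3 runs the workload on two reactors.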
00:13:40.475 24000.00 IOPS, 93.75 MiB/s [2024-11-10T05:15:34.655Z] 23920.00 IOPS, 93.44 MiB/s [2024-11-10T05:15:35.597Z] 24213.33 IOPS, 94.58 MiB/s [2024-11-10T05:15:36.540Z] 24624.00 IOPS, 96.19 MiB/s 00:13:43.304 Latency(us) 00:13:43.304 [2024-11-10T05:15:36.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.304 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.304 Verification LBA range: start 0x0 length 0xa0000 00:13:43.305 nvme0n1 : 5.02 1885.73 7.37 0.00 0.00 67748.82 8116.38 68560.74 00:13:43.305 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.305 Verification LBA range: start 0xa0000 length 0xa0000 00:13:43.305 nvme0n1 : 5.03 1908.73 7.46 0.00 0.00 66918.88 6024.27 71787.13 00:13:43.305 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.305 Verification LBA range: start 0x0 length 0xbd0bd 00:13:43.305 nvme1n1 : 5.04 2519.72 9.84 0.00 0.00 50595.99 5444.53 62107.96 00:13:43.305 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.305 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:43.305 nvme1n1 : 5.06 2422.44 9.46 0.00 0.00 52606.70 6125.10 60494.77 00:13:43.305 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.305 Verification LBA range: start 0x0 length 0x80000 00:13:43.305 nvme2n1 : 5.02 1910.60 7.46 0.00 0.00 66644.02 8570.09 62914.56 00:13:43.305 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.305 Verification LBA range: start 0x80000 length 0x80000 00:13:43.305 nvme2n1 : 5.07 1970.39 7.70 0.00 0.00 64413.78 5898.24 63317.86 00:13:43.305 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.305 Verification LBA range: start 0x0 length 0x80000 00:13:43.305 nvme2n2 : 5.05 1902.30 7.43 0.00 0.00 66856.99 8570.09 62914.56 00:13:43.305 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.305 Verification LBA range: start 0x80000 length 0x80000 00:13:43.305 nvme2n2 : 5.06 1921.00 7.50 0.00 0.00 65935.24 9175.04 65737.65 00:13:43.305 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.305 Verification LBA range: start 0x0 length 0x80000 00:13:43.305 nvme2n3 : 5.04 1905.31 7.44 0.00 0.00 66671.51 7461.02 65737.65 00:13:43.305 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.305 Verification LBA range: start 0x80000 length 0x80000 00:13:43.305 nvme2n3 : 5.06 1921.79 7.51 0.00 0.00 65857.38 8318.03 65334.35 00:13:43.305 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.305 Verification LBA range: start 0x0 length 0x20000 00:13:43.305 nvme3n1 : 5.04 1903.12 7.43 0.00 0.00 66625.66 4159.02 65334.35 00:13:43.305 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.305 Verification LBA range: start 0x20000 length 0x20000 00:13:43.305 nvme3n1 : 5.07 1918.70 7.49 0.00 0.00 65809.94 4335.46 68157.44 00:13:43.305 [2024-11-10T05:15:36.541Z] =================================================================================================================== 00:13:43.305 [2024-11-10T05:15:36.541Z] Total : 24089.82 94.10 0.00 0.00 63308.43 4159.02 71787.13 00:13:43.566 00:13:43.566 real 0m5.821s 00:13:43.566 user 0m9.350s 00:13:43.566 sys 0m1.383s 00:13:43.566 05:15:36 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.566 05:15:36 
blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:43.566 ************************************ 00:13:43.566 END TEST bdev_verify 00:13:43.566 ************************************ 00:13:43.566 05:15:36 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:43.566 05:15:36 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:43.566 05:15:36 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.566 05:15:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.566 ************************************ 00:13:43.566 START TEST bdev_verify_big_io 00:13:43.566 ************************************ 00:13:43.566 05:15:36 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:43.826 [2024-11-10 05:15:36.807382] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:43.826 [2024-11-10 05:15:36.807536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82277 ] 00:13:43.826 [2024-11-10 05:15:36.957774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:43.826 [2024-11-10 05:15:37.009957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.826 [2024-11-10 05:15:37.010052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.088 Running I/O for 5 seconds... 
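bdev_verify_big_io repeats the verify workload with -o 65536, so throughput and IOPS in the table below are related by MiB/s = IOPS x io_size / 2^20, which works out to IOPS / 16 at 64 KiB IOs (and IOPS / 256 for the 4 KiB run above). A quick check against the nvme3n1 row that follows:

    awk 'BEGIN { print 192.73 * 65536 / 1048576 }'   # -> 12.0456, matching the reported 12.05 MiB/s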
00:13:49.991 1218.00 IOPS, 76.12 MiB/s [2024-11-10T05:15:43.227Z] 2961.50 IOPS, 185.09 MiB/s [2024-11-10T05:15:43.227Z] 3068.33 IOPS, 191.77 MiB/s 00:13:49.991 Latency(us) 00:13:49.991 [2024-11-10T05:15:43.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.991 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:49.991 Verification LBA range: start 0x0 length 0xa000 00:13:49.991 nvme0n1 : 5.92 118.99 7.44 0.00 0.00 1048933.76 91548.75 1206669.00 00:13:49.991 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:49.991 Verification LBA range: start 0xa000 length 0xa000 00:13:49.991 nvme0n1 : 5.82 109.97 6.87 0.00 0.00 1104611.80 290374.89 1180857.90 00:13:49.991 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:49.991 Verification LBA range: start 0x0 length 0xbd0b 00:13:49.991 nvme1n1 : 5.90 192.60 12.04 0.00 0.00 631926.95 7713.08 993727.41 00:13:49.991 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:49.991 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:49.991 nvme1n1 : 5.90 151.86 9.49 0.00 0.00 797311.44 9477.51 864671.90 00:13:49.991 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:49.991 Verification LBA range: start 0x0 length 0x8000 00:13:49.991 nvme2n1 : 5.90 151.80 9.49 0.00 0.00 780646.29 100824.62 1025991.29 00:13:49.991 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:49.991 Verification LBA range: start 0x8000 length 0x8000 00:13:49.991 nvme2n1 : 5.90 100.29 6.27 0.00 0.00 1169635.03 80256.39 1897115.96 00:13:49.991 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:49.991 Verification LBA range: start 0x0 length 0x8000 00:13:49.991 nvme2n2 : 5.91 94.80 5.92 0.00 0.00 1214896.29 121796.14 2232660.28 00:13:49.991 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:49.991 Verification LBA range: start 0x8000 length 0x8000 00:13:49.991 nvme2n2 : 5.91 162.57 10.16 0.00 0.00 697401.34 56461.78 751748.33 00:13:49.991 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:49.991 Verification LBA range: start 0x0 length 0x8000 00:13:49.991 nvme2n3 : 5.91 127.21 7.95 0.00 0.00 880307.23 93565.24 1910021.51 00:13:49.991 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:49.991 Verification LBA range: start 0x8000 length 0x8000 00:13:49.991 nvme2n3 : 5.89 108.63 6.79 0.00 0.00 1016781.59 120182.94 1884210.41 00:13:49.991 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:49.991 Verification LBA range: start 0x0 length 0x2000 00:13:49.991 nvme3n1 : 5.92 135.24 8.45 0.00 0.00 812928.50 2482.81 1606741.07 00:13:49.991 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:49.991 Verification LBA range: start 0x2000 length 0x2000 00:13:49.991 nvme3n1 : 5.91 192.73 12.05 0.00 0.00 561684.91 5419.32 1135688.47 00:13:49.991 [2024-11-10T05:15:43.227Z] =================================================================================================================== 00:13:49.991 [2024-11-10T05:15:43.227Z] Total : 1646.69 102.92 0.00 0.00 846343.22 2482.81 2232660.28 00:13:50.252 00:13:50.252 real 0m6.700s 00:13:50.252 user 0m12.219s 00:13:50.252 sys 0m0.488s 00:13:50.252 ************************************ 00:13:50.252 END TEST bdev_verify_big_io 00:13:50.252 05:15:43 blockdev_xnvme.bdev_verify_big_io -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:50.252 05:15:43 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.252 ************************************ 00:13:50.514 05:15:43 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:50.514 05:15:43 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:50.514 05:15:43 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:50.514 05:15:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.514 ************************************ 00:13:50.514 START TEST bdev_write_zeroes 00:13:50.514 ************************************ 00:13:50.514 05:15:43 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:50.514 [2024-11-10 05:15:43.571450] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:50.514 [2024-11-10 05:15:43.571593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82378 ] 00:13:50.514 [2024-11-10 05:15:43.723896] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.775 [2024-11-10 05:15:43.773222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.775 Running I/O for 1 seconds... 00:13:52.163 82241.00 IOPS, 321.25 MiB/s 00:13:52.163 Latency(us) 00:13:52.163 [2024-11-10T05:15:45.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.163 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.163 nvme0n1 : 1.02 13436.18 52.49 0.00 0.00 9516.63 5847.83 22282.24 00:13:52.163 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.163 nvme1n1 : 1.03 14248.43 55.66 0.00 0.00 8965.77 4688.34 18955.03 00:13:52.163 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.163 nvme2n1 : 1.03 13423.03 52.43 0.00 0.00 9461.29 5091.64 23592.96 00:13:52.163 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.163 nvme2n2 : 1.03 13346.85 52.14 0.00 0.00 9508.22 5091.64 22383.06 00:13:52.163 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.163 nvme2n3 : 1.03 13331.56 52.08 0.00 0.00 9510.70 4990.82 22483.89 00:13:52.163 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.163 nvme3n1 : 1.03 13407.69 52.37 0.00 0.00 9446.13 4587.52 23391.31 00:13:52.163 [2024-11-10T05:15:45.399Z] =================================================================================================================== 00:13:52.163 [2024-11-10T05:15:45.399Z] Total : 81193.73 317.16 0.00 0.00 9396.59 4587.52 23592.96 00:13:52.163 00:13:52.163 real 0m1.747s 00:13:52.163 user 0m1.088s 00:13:52.163 sys 0m0.480s 00:13:52.163 ************************************ 00:13:52.163 END TEST bdev_write_zeroes 00:13:52.163 ************************************ 00:13:52.163 05:15:45 blockdev_xnvme.bdev_write_zeroes -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:52.163 05:15:45 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:52.163 05:15:45 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:52.163 05:15:45 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:52.163 05:15:45 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:52.163 05:15:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:52.163 ************************************ 00:13:52.163 START TEST bdev_json_nonenclosed 00:13:52.163 ************************************ 00:13:52.163 05:15:45 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:52.163 [2024-11-10 05:15:45.389509] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:52.163 [2024-11-10 05:15:45.389657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82413 ] 00:13:52.425 [2024-11-10 05:15:45.540492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.425 [2024-11-10 05:15:45.590553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.425 [2024-11-10 05:15:45.590675] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
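The bdev_json_nonenclosed test feeds bdevperf a deliberately malformed config to confirm that json_config_prepare_ctx rejects it and the app shuts down cleanly rather than crashing. The error above means the file's top level is not a JSON object; a valid config wraps everything in braces, as the save_config dumps later in this log show:

    "subsystems": [ ... ]        # invalid: top level not enclosed in {}
    { "subsystems": [ ... ] }    # valid top-level shape

The exact contents of test/bdev/nonenclosed.json are not shown in this log; the fragments above only illustrate the shape being checked.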
00:13:52.425 [2024-11-10 05:15:45.590697] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:52.425 [2024-11-10 05:15:45.590713] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:52.686 00:13:52.686 real 0m0.377s 00:13:52.686 user 0m0.153s 00:13:52.686 sys 0m0.119s 00:13:52.686 ************************************ 00:13:52.686 END TEST bdev_json_nonenclosed 00:13:52.686 ************************************ 00:13:52.686 05:15:45 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:52.686 05:15:45 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:52.686 05:15:45 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:52.686 05:15:45 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:52.686 05:15:45 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:52.686 05:15:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:52.686 ************************************ 00:13:52.686 START TEST bdev_json_nonarray 00:13:52.686 ************************************ 00:13:52.686 05:15:45 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:52.686 [2024-11-10 05:15:45.827355] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:52.686 [2024-11-10 05:15:45.827506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82440 ] 00:13:52.948 [2024-11-10 05:15:45.979712] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.948 [2024-11-10 05:15:46.030291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.948 [2024-11-10 05:15:46.030421] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:13:52.948 [2024-11-10 05:15:46.030440] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:52.948 [2024-11-10 05:15:46.030453] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:52.948 00:13:52.948 real 0m0.375s 00:13:52.948 user 0m0.158s 00:13:52.948 sys 0m0.112s 00:13:52.948 ************************************ 00:13:52.948 END TEST bdev_json_nonarray 00:13:52.948 ************************************ 00:13:52.948 05:15:46 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:52.948 05:15:46 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:53.209 05:15:46 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:53.209 05:15:46 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:53.209 05:15:46 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:53.209 05:15:46 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:53.209 05:15:46 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:53.209 05:15:46 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:53.209 05:15:46 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:53.209 05:15:46 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:53.209 05:15:46 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:53.209 05:15:46 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:53.209 05:15:46 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:53.209 05:15:46 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:53.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:58.759 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:58.759 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:58.759 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:58.759 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:58.759 00:13:58.759 real 0m51.969s 00:13:58.759 user 1m15.695s 00:13:58.759 sys 0m35.541s 00:13:58.759 05:15:51 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:58.759 05:15:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:58.759 ************************************ 00:13:58.759 END TEST blockdev_xnvme 00:13:58.759 ************************************ 00:13:58.759 05:15:51 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:58.759 05:15:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:58.759 05:15:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:58.759 05:15:51 -- common/autotest_common.sh@10 -- # set +x 00:13:58.759 ************************************ 00:13:58.759 START TEST ublk 00:13:58.759 ************************************ 00:13:58.759 05:15:51 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:59.020 * Looking for test storage... 
00:13:59.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:59.020 05:15:52 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:59.020 05:15:52 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:13:59.020 05:15:52 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:59.020 05:15:52 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:59.020 05:15:52 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:59.021 05:15:52 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:59.021 05:15:52 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:59.021 05:15:52 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:59.021 05:15:52 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:59.021 05:15:52 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:59.021 05:15:52 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:59.021 05:15:52 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:59.021 05:15:52 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:59.021 05:15:52 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:59.021 05:15:52 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:59.021 05:15:52 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:59.021 05:15:52 ublk -- scripts/common.sh@345 -- # : 1 00:13:59.021 05:15:52 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:59.021 05:15:52 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:59.021 05:15:52 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:59.021 05:15:52 ublk -- scripts/common.sh@353 -- # local d=1 00:13:59.021 05:15:52 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:59.021 05:15:52 ublk -- scripts/common.sh@355 -- # echo 1 00:13:59.021 05:15:52 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:59.021 05:15:52 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:59.021 05:15:52 ublk -- scripts/common.sh@353 -- # local d=2 00:13:59.021 05:15:52 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:59.021 05:15:52 ublk -- scripts/common.sh@355 -- # echo 2 00:13:59.021 05:15:52 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:59.021 05:15:52 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:59.021 05:15:52 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:59.021 05:15:52 ublk -- scripts/common.sh@368 -- # return 0 00:13:59.021 05:15:52 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:59.021 05:15:52 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:59.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.021 --rc genhtml_branch_coverage=1 00:13:59.021 --rc genhtml_function_coverage=1 00:13:59.021 --rc genhtml_legend=1 00:13:59.021 --rc geninfo_all_blocks=1 00:13:59.021 --rc geninfo_unexecuted_blocks=1 00:13:59.021 00:13:59.021 ' 00:13:59.021 05:15:52 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:59.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.021 --rc genhtml_branch_coverage=1 00:13:59.021 --rc genhtml_function_coverage=1 00:13:59.021 --rc genhtml_legend=1 00:13:59.021 --rc geninfo_all_blocks=1 00:13:59.021 --rc geninfo_unexecuted_blocks=1 00:13:59.021 00:13:59.021 ' 00:13:59.021 05:15:52 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:59.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.021 --rc genhtml_branch_coverage=1 00:13:59.021 --rc 
genhtml_function_coverage=1 00:13:59.021 --rc genhtml_legend=1 00:13:59.021 --rc geninfo_all_blocks=1 00:13:59.021 --rc geninfo_unexecuted_blocks=1 00:13:59.021 00:13:59.021 ' 00:13:59.021 05:15:52 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:59.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.021 --rc genhtml_branch_coverage=1 00:13:59.021 --rc genhtml_function_coverage=1 00:13:59.021 --rc genhtml_legend=1 00:13:59.021 --rc geninfo_all_blocks=1 00:13:59.021 --rc geninfo_unexecuted_blocks=1 00:13:59.021 00:13:59.021 ' 00:13:59.021 05:15:52 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:59.021 05:15:52 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:59.021 05:15:52 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:59.021 05:15:52 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:59.021 05:15:52 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:59.021 05:15:52 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:59.021 05:15:52 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:59.021 05:15:52 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:59.021 05:15:52 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:59.021 05:15:52 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:59.021 05:15:52 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:59.021 05:15:52 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:59.021 05:15:52 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:59.021 05:15:52 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:59.021 05:15:52 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:59.021 05:15:52 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:59.021 05:15:52 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:59.021 05:15:52 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:59.021 05:15:52 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:59.021 05:15:52 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:59.021 05:15:52 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:59.021 05:15:52 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.021 05:15:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:59.021 ************************************ 00:13:59.021 START TEST test_save_ublk_config 00:13:59.021 ************************************ 00:13:59.021 05:15:52 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:59.021 05:15:52 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:59.021 05:15:52 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82723 00:13:59.021 05:15:52 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:59.021 05:15:52 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82723 00:13:59.021 05:15:52 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82723 ']' 00:13:59.021 05:15:52 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.021 05:15:52 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:59.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
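test_save_config starts a plain spdk_tgt, builds a small ublk stack over a malloc bdev, and snapshots the result with save_config (the JSON dump reproduced below). A hedged sketch of the equivalent manual rpc.py sequence follows; the flag spellings here are assumptions, not commands copied from this run:

    ./scripts/rpc.py ublk_create_target -m 1                # cpumask "1", as in the saved config
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0  # 8192 blocks x 4096 B = 32 MiB
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128  # exposes /dev/ublkb0
    ./scripts/rpc.py save_config > ublk.json

The trace that follows performs these same steps in-process, then verifies that /dev/ublkb0 exists before killing the target.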
00:13:59.021 05:15:52 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:59.021 05:15:52 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.021 05:15:52 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:59.021 05:15:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:59.282 [2024-11-10 05:15:52.261378] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:59.282 [2024-11-10 05:15:52.261544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82723 ] 00:13:59.282 [2024-11-10 05:15:52.415523] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.282 [2024-11-10 05:15:52.473621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.225 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:00.225 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:00.225 05:15:53 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:00.225 05:15:53 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:00.225 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.225 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:00.225 [2024-11-10 05:15:53.110014] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:00.225 [2024-11-10 05:15:53.110377] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:00.225 malloc0 00:14:00.225 [2024-11-10 05:15:53.142132] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:00.225 [2024-11-10 05:15:53.142237] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:00.225 [2024-11-10 05:15:53.142247] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:00.225 [2024-11-10 05:15:53.142259] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:00.225 [2024-11-10 05:15:53.151114] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:00.225 [2024-11-10 05:15:53.151154] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:00.225 [2024-11-10 05:15:53.158027] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:00.225 [2024-11-10 05:15:53.158147] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:00.225 [2024-11-10 05:15:53.175024] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:00.225 0 00:14:00.225 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.225 05:15:53 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:00.225 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.225 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:00.487 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:00.487 05:15:53 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:00.487 "subsystems": [ 00:14:00.487 { 00:14:00.487 "subsystem": "fsdev", 00:14:00.487 "config": [ 00:14:00.487 { 00:14:00.487 "method": "fsdev_set_opts", 00:14:00.487 "params": { 00:14:00.487 "fsdev_io_pool_size": 65535, 00:14:00.487 "fsdev_io_cache_size": 256 00:14:00.487 } 00:14:00.487 } 00:14:00.487 ] 00:14:00.487 }, 00:14:00.487 { 00:14:00.487 "subsystem": "keyring", 00:14:00.487 "config": [] 00:14:00.487 }, 00:14:00.487 { 00:14:00.487 "subsystem": "iobuf", 00:14:00.487 "config": [ 00:14:00.487 { 00:14:00.487 "method": "iobuf_set_options", 00:14:00.487 "params": { 00:14:00.487 "small_pool_count": 8192, 00:14:00.487 "large_pool_count": 1024, 00:14:00.487 "small_bufsize": 8192, 00:14:00.487 "large_bufsize": 135168 00:14:00.487 } 00:14:00.487 } 00:14:00.487 ] 00:14:00.487 }, 00:14:00.487 { 00:14:00.487 "subsystem": "sock", 00:14:00.487 "config": [ 00:14:00.487 { 00:14:00.487 "method": "sock_set_default_impl", 00:14:00.487 "params": { 00:14:00.487 "impl_name": "posix" 00:14:00.487 } 00:14:00.487 }, 00:14:00.487 { 00:14:00.487 "method": "sock_impl_set_options", 00:14:00.487 "params": { 00:14:00.487 "impl_name": "ssl", 00:14:00.487 "recv_buf_size": 4096, 00:14:00.487 "send_buf_size": 4096, 00:14:00.487 "enable_recv_pipe": true, 00:14:00.487 "enable_quickack": false, 00:14:00.487 "enable_placement_id": 0, 00:14:00.487 "enable_zerocopy_send_server": true, 00:14:00.487 "enable_zerocopy_send_client": false, 00:14:00.487 "zerocopy_threshold": 0, 00:14:00.487 "tls_version": 0, 00:14:00.487 "enable_ktls": false 00:14:00.487 } 00:14:00.487 }, 00:14:00.487 { 00:14:00.487 "method": "sock_impl_set_options", 00:14:00.487 "params": { 00:14:00.487 "impl_name": "posix", 00:14:00.487 "recv_buf_size": 2097152, 00:14:00.487 "send_buf_size": 2097152, 00:14:00.487 "enable_recv_pipe": true, 00:14:00.487 "enable_quickack": false, 00:14:00.487 "enable_placement_id": 0, 00:14:00.487 "enable_zerocopy_send_server": true, 00:14:00.487 "enable_zerocopy_send_client": false, 00:14:00.487 "zerocopy_threshold": 0, 00:14:00.487 "tls_version": 0, 00:14:00.487 "enable_ktls": false 00:14:00.487 } 00:14:00.487 } 00:14:00.487 ] 00:14:00.487 }, 00:14:00.487 { 00:14:00.487 "subsystem": "vmd", 00:14:00.487 "config": [] 00:14:00.487 }, 00:14:00.487 { 00:14:00.487 "subsystem": "accel", 00:14:00.487 "config": [ 00:14:00.487 { 00:14:00.487 "method": "accel_set_options", 00:14:00.487 "params": { 00:14:00.487 "small_cache_size": 128, 00:14:00.487 "large_cache_size": 16, 00:14:00.487 "task_count": 2048, 00:14:00.487 "sequence_count": 2048, 00:14:00.487 "buf_count": 2048 00:14:00.487 } 00:14:00.487 } 00:14:00.487 ] 00:14:00.487 }, 00:14:00.487 { 00:14:00.487 "subsystem": "bdev", 00:14:00.487 "config": [ 00:14:00.487 { 00:14:00.487 "method": "bdev_set_options", 00:14:00.487 "params": { 00:14:00.487 "bdev_io_pool_size": 65535, 00:14:00.487 "bdev_io_cache_size": 256, 00:14:00.487 "bdev_auto_examine": true, 00:14:00.487 "iobuf_small_cache_size": 128, 00:14:00.487 "iobuf_large_cache_size": 16 00:14:00.487 } 00:14:00.487 }, 00:14:00.487 { 00:14:00.487 "method": "bdev_raid_set_options", 00:14:00.487 "params": { 00:14:00.487 "process_window_size_kb": 1024, 00:14:00.487 "process_max_bandwidth_mb_sec": 0 00:14:00.487 } 00:14:00.487 }, 00:14:00.487 { 00:14:00.487 "method": "bdev_iscsi_set_options", 00:14:00.487 "params": { 00:14:00.487 "timeout_sec": 30 00:14:00.487 } 00:14:00.487 }, 00:14:00.487 { 00:14:00.487 "method": "bdev_nvme_set_options", 
00:14:00.487 "params": { 00:14:00.487 "action_on_timeout": "none", 00:14:00.487 "timeout_us": 0, 00:14:00.487 "timeout_admin_us": 0, 00:14:00.487 "keep_alive_timeout_ms": 10000, 00:14:00.487 "arbitration_burst": 0, 00:14:00.487 "low_priority_weight": 0, 00:14:00.487 "medium_priority_weight": 0, 00:14:00.487 "high_priority_weight": 0, 00:14:00.487 "nvme_adminq_poll_period_us": 10000, 00:14:00.487 "nvme_ioq_poll_period_us": 0, 00:14:00.487 "io_queue_requests": 0, 00:14:00.487 "delay_cmd_submit": true, 00:14:00.487 "transport_retry_count": 4, 00:14:00.487 "bdev_retry_count": 3, 00:14:00.487 "transport_ack_timeout": 0, 00:14:00.487 "ctrlr_loss_timeout_sec": 0, 00:14:00.487 "reconnect_delay_sec": 0, 00:14:00.487 "fast_io_fail_timeout_sec": 0, 00:14:00.487 "disable_auto_failback": false, 00:14:00.487 "generate_uuids": false, 00:14:00.487 "transport_tos": 0, 00:14:00.487 "nvme_error_stat": false, 00:14:00.487 "rdma_srq_size": 0, 00:14:00.487 "io_path_stat": false, 00:14:00.487 "allow_accel_sequence": false, 00:14:00.487 "rdma_max_cq_size": 0, 00:14:00.487 "rdma_cm_event_timeout_ms": 0, 00:14:00.487 "dhchap_digests": [ 00:14:00.487 "sha256", 00:14:00.487 "sha384", 00:14:00.487 "sha512" 00:14:00.487 ], 00:14:00.487 "dhchap_dhgroups": [ 00:14:00.487 "null", 00:14:00.487 "ffdhe2048", 00:14:00.487 "ffdhe3072", 00:14:00.487 "ffdhe4096", 00:14:00.487 "ffdhe6144", 00:14:00.487 "ffdhe8192" 00:14:00.487 ] 00:14:00.487 } 00:14:00.487 }, 00:14:00.487 { 00:14:00.487 "method": "bdev_nvme_set_hotplug", 00:14:00.487 "params": { 00:14:00.487 "period_us": 100000, 00:14:00.487 "enable": false 00:14:00.487 } 00:14:00.487 }, 00:14:00.487 { 00:14:00.487 "method": "bdev_malloc_create", 00:14:00.487 "params": { 00:14:00.487 "name": "malloc0", 00:14:00.487 "num_blocks": 8192, 00:14:00.487 "block_size": 4096, 00:14:00.487 "physical_block_size": 4096, 00:14:00.487 "uuid": "bff8fa2d-1722-47ee-b5dd-5cf16b6f3fc3", 00:14:00.487 "optimal_io_boundary": 0, 00:14:00.487 "md_size": 0, 00:14:00.487 "dif_type": 0, 00:14:00.488 "dif_is_head_of_md": false, 00:14:00.488 "dif_pi_format": 0 00:14:00.488 } 00:14:00.488 }, 00:14:00.488 { 00:14:00.488 "method": "bdev_wait_for_examine" 00:14:00.488 } 00:14:00.488 ] 00:14:00.488 }, 00:14:00.488 { 00:14:00.488 "subsystem": "scsi", 00:14:00.488 "config": null 00:14:00.488 }, 00:14:00.488 { 00:14:00.488 "subsystem": "scheduler", 00:14:00.488 "config": [ 00:14:00.488 { 00:14:00.488 "method": "framework_set_scheduler", 00:14:00.488 "params": { 00:14:00.488 "name": "static" 00:14:00.488 } 00:14:00.488 } 00:14:00.488 ] 00:14:00.488 }, 00:14:00.488 { 00:14:00.488 "subsystem": "vhost_scsi", 00:14:00.488 "config": [] 00:14:00.488 }, 00:14:00.488 { 00:14:00.488 "subsystem": "vhost_blk", 00:14:00.488 "config": [] 00:14:00.488 }, 00:14:00.488 { 00:14:00.488 "subsystem": "ublk", 00:14:00.488 "config": [ 00:14:00.488 { 00:14:00.488 "method": "ublk_create_target", 00:14:00.488 "params": { 00:14:00.488 "cpumask": "1" 00:14:00.488 } 00:14:00.488 }, 00:14:00.488 { 00:14:00.488 "method": "ublk_start_disk", 00:14:00.488 "params": { 00:14:00.488 "bdev_name": "malloc0", 00:14:00.488 "ublk_id": 0, 00:14:00.488 "num_queues": 1, 00:14:00.488 "queue_depth": 128 00:14:00.488 } 00:14:00.488 } 00:14:00.488 ] 00:14:00.488 }, 00:14:00.488 { 00:14:00.488 "subsystem": "nbd", 00:14:00.488 "config": [] 00:14:00.488 }, 00:14:00.488 { 00:14:00.488 "subsystem": "nvmf", 00:14:00.488 "config": [ 00:14:00.488 { 00:14:00.488 "method": "nvmf_set_config", 00:14:00.488 "params": { 00:14:00.488 "discovery_filter": "match_any", 00:14:00.488 
"admin_cmd_passthru": { 00:14:00.488 "identify_ctrlr": false 00:14:00.488 }, 00:14:00.488 "dhchap_digests": [ 00:14:00.488 "sha256", 00:14:00.488 "sha384", 00:14:00.488 "sha512" 00:14:00.488 ], 00:14:00.488 "dhchap_dhgroups": [ 00:14:00.488 "null", 00:14:00.488 "ffdhe2048", 00:14:00.488 "ffdhe3072", 00:14:00.488 "ffdhe4096", 00:14:00.488 "ffdhe6144", 00:14:00.488 "ffdhe8192" 00:14:00.488 ] 00:14:00.488 } 00:14:00.488 }, 00:14:00.488 { 00:14:00.488 "method": "nvmf_set_max_subsystems", 00:14:00.488 "params": { 00:14:00.488 "max_subsystems": 1024 00:14:00.488 } 00:14:00.488 }, 00:14:00.488 { 00:14:00.488 "method": "nvmf_set_crdt", 00:14:00.488 "params": { 00:14:00.488 "crdt1": 0, 00:14:00.488 "crdt2": 0, 00:14:00.488 "crdt3": 0 00:14:00.488 } 00:14:00.488 } 00:14:00.488 ] 00:14:00.488 }, 00:14:00.488 { 00:14:00.488 "subsystem": "iscsi", 00:14:00.488 "config": [ 00:14:00.488 { 00:14:00.488 "method": "iscsi_set_options", 00:14:00.488 "params": { 00:14:00.488 "node_base": "iqn.2016-06.io.spdk", 00:14:00.488 "max_sessions": 128, 00:14:00.488 "max_connections_per_session": 2, 00:14:00.488 "max_queue_depth": 64, 00:14:00.488 "default_time2wait": 2, 00:14:00.488 "default_time2retain": 20, 00:14:00.488 "first_burst_length": 8192, 00:14:00.488 "immediate_data": true, 00:14:00.488 "allow_duplicated_isid": false, 00:14:00.488 "error_recovery_level": 0, 00:14:00.488 "nop_timeout": 60, 00:14:00.488 "nop_in_interval": 30, 00:14:00.488 "disable_chap": false, 00:14:00.488 "require_chap": false, 00:14:00.488 "mutual_chap": false, 00:14:00.488 "chap_group": 0, 00:14:00.488 "max_large_datain_per_connection": 64, 00:14:00.488 "max_r2t_per_connection": 4, 00:14:00.488 "pdu_pool_size": 36864, 00:14:00.488 "immediate_data_pool_size": 16384, 00:14:00.488 "data_out_pool_size": 2048 00:14:00.488 } 00:14:00.488 } 00:14:00.488 ] 00:14:00.488 } 00:14:00.488 ] 00:14:00.488 }' 00:14:00.488 05:15:53 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82723 00:14:00.488 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82723 ']' 00:14:00.488 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82723 00:14:00.488 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:00.488 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:00.488 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82723 00:14:00.488 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:00.488 killing process with pid 82723 00:14:00.488 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:00.488 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82723' 00:14:00.488 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82723 00:14:00.488 05:15:53 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82723 00:14:00.749 [2024-11-10 05:15:53.781486] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:00.749 [2024-11-10 05:15:53.829034] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:00.749 [2024-11-10 05:15:53.829206] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:00.749 [2024-11-10 05:15:53.840036] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_DEL_DEV completed 00:14:00.749 [2024-11-10 05:15:53.840114] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:00.749 [2024-11-10 05:15:53.840123] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:00.749 [2024-11-10 05:15:53.840158] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:00.749 [2024-11-10 05:15:53.840304] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:01.340 05:15:54 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82762 00:14:01.340 05:15:54 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82762 00:14:01.340 05:15:54 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82762 ']' 00:14:01.340 05:15:54 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.340 05:15:54 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:01.340 05:15:54 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:01.340 05:15:54 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.340 05:15:54 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:01.340 05:15:54 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:01.340 05:15:54 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:01.340 "subsystems": [ 00:14:01.340 { 00:14:01.340 "subsystem": "fsdev", 00:14:01.340 "config": [ 00:14:01.340 { 00:14:01.340 "method": "fsdev_set_opts", 00:14:01.340 "params": { 00:14:01.340 "fsdev_io_pool_size": 65535, 00:14:01.340 "fsdev_io_cache_size": 256 00:14:01.340 } 00:14:01.340 } 00:14:01.340 ] 00:14:01.340 }, 00:14:01.340 { 00:14:01.340 "subsystem": "keyring", 00:14:01.340 "config": [] 00:14:01.340 }, 00:14:01.340 { 00:14:01.340 "subsystem": "iobuf", 00:14:01.340 "config": [ 00:14:01.340 { 00:14:01.340 "method": "iobuf_set_options", 00:14:01.340 "params": { 00:14:01.340 "small_pool_count": 8192, 00:14:01.340 "large_pool_count": 1024, 00:14:01.340 "small_bufsize": 8192, 00:14:01.340 "large_bufsize": 135168 00:14:01.340 } 00:14:01.340 } 00:14:01.340 ] 00:14:01.340 }, 00:14:01.340 { 00:14:01.340 "subsystem": "sock", 00:14:01.340 "config": [ 00:14:01.340 { 00:14:01.340 "method": "sock_set_default_impl", 00:14:01.340 "params": { 00:14:01.340 "impl_name": "posix" 00:14:01.340 } 00:14:01.340 }, 00:14:01.340 { 00:14:01.340 "method": "sock_impl_set_options", 00:14:01.340 "params": { 00:14:01.340 "impl_name": "ssl", 00:14:01.340 "recv_buf_size": 4096, 00:14:01.340 "send_buf_size": 4096, 00:14:01.340 "enable_recv_pipe": true, 00:14:01.340 "enable_quickack": false, 00:14:01.340 "enable_placement_id": 0, 00:14:01.340 "enable_zerocopy_send_server": true, 00:14:01.340 "enable_zerocopy_send_client": false, 00:14:01.340 "zerocopy_threshold": 0, 00:14:01.340 "tls_version": 0, 00:14:01.340 "enable_ktls": false 00:14:01.340 } 00:14:01.340 }, 00:14:01.340 { 00:14:01.340 "method": "sock_impl_set_options", 00:14:01.340 "params": { 00:14:01.340 "impl_name": "posix", 00:14:01.340 "recv_buf_size": 2097152, 00:14:01.340 "send_buf_size": 2097152, 00:14:01.340 "enable_recv_pipe": true, 00:14:01.340 "enable_quickack": false, 00:14:01.340 "enable_placement_id": 0, 00:14:01.340 "enable_zerocopy_send_server": 
true, 00:14:01.340 "enable_zerocopy_send_client": false, 00:14:01.340 "zerocopy_threshold": 0, 00:14:01.340 "tls_version": 0, 00:14:01.340 "enable_ktls": false 00:14:01.340 } 00:14:01.340 } 00:14:01.340 ] 00:14:01.340 }, 00:14:01.340 { 00:14:01.340 "subsystem": "vmd", 00:14:01.340 "config": [] 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "subsystem": "accel", 00:14:01.341 "config": [ 00:14:01.341 { 00:14:01.341 "method": "accel_set_options", 00:14:01.341 "params": { 00:14:01.341 "small_cache_size": 128, 00:14:01.341 "large_cache_size": 16, 00:14:01.341 "task_count": 2048, 00:14:01.341 "sequence_count": 2048, 00:14:01.341 "buf_count": 2048 00:14:01.341 } 00:14:01.341 } 00:14:01.341 ] 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "subsystem": "bdev", 00:14:01.341 "config": [ 00:14:01.341 { 00:14:01.341 "method": "bdev_set_options", 00:14:01.341 "params": { 00:14:01.341 "bdev_io_pool_size": 65535, 00:14:01.341 "bdev_io_cache_size": 256, 00:14:01.341 "bdev_auto_examine": true, 00:14:01.341 "iobuf_small_cache_size": 128, 00:14:01.341 "iobuf_large_cache_size": 16 00:14:01.341 } 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "method": "bdev_raid_set_options", 00:14:01.341 "params": { 00:14:01.341 "process_window_size_kb": 1024, 00:14:01.341 "process_max_bandwidth_mb_sec": 0 00:14:01.341 } 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "method": "bdev_iscsi_set_options", 00:14:01.341 "params": { 00:14:01.341 "timeout_sec": 30 00:14:01.341 } 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "method": "bdev_nvme_set_options", 00:14:01.341 "params": { 00:14:01.341 "action_on_timeout": "none", 00:14:01.341 "timeout_us": 0, 00:14:01.341 "timeout_admin_us": 0, 00:14:01.341 "keep_alive_timeout_ms": 10000, 00:14:01.341 "arbitration_burst": 0, 00:14:01.341 "low_priority_weight": 0, 00:14:01.341 "medium_priority_weight": 0, 00:14:01.341 "high_priority_weight": 0, 00:14:01.341 "nvme_adminq_poll_period_us": 10000, 00:14:01.341 "nvme_ioq_poll_period_us": 0, 00:14:01.341 "io_queue_requests": 0, 00:14:01.341 "delay_cmd_submit": true, 00:14:01.341 "transport_retry_count": 4, 00:14:01.341 "bdev_retry_count": 3, 00:14:01.341 "transport_ack_timeout": 0, 00:14:01.341 "ctrlr_loss_timeout_sec": 0, 00:14:01.341 "reconnect_delay_sec": 0, 00:14:01.341 "fast_io_fail_timeout_sec": 0, 00:14:01.341 "disable_auto_failback": false, 00:14:01.341 "generate_uuids": false, 00:14:01.341 "transport_tos": 0, 00:14:01.341 "nvme_error_stat": false, 00:14:01.341 "rdma_srq_size": 0, 00:14:01.341 "io_path_stat": false, 00:14:01.341 "allow_accel_sequence": false, 00:14:01.341 "rdma_max_cq_size": 0, 00:14:01.341 "rdma_cm_event_timeout_ms": 0, 00:14:01.341 "dhchap_digests": [ 00:14:01.341 "sha256", 00:14:01.341 "sha384", 00:14:01.341 "sha512" 00:14:01.341 ], 00:14:01.341 "dhchap_dhgroups": [ 00:14:01.341 "null", 00:14:01.341 "ffdhe2048", 00:14:01.341 "ffdhe3072", 00:14:01.341 "ffdhe4096", 00:14:01.341 "ffdhe6144", 00:14:01.341 "ffdhe8192" 00:14:01.341 ] 00:14:01.341 } 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "method": "bdev_nvme_set_hotplug", 00:14:01.341 "params": { 00:14:01.341 "period_us": 100000, 00:14:01.341 "enable": false 00:14:01.341 } 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "method": "bdev_malloc_create", 00:14:01.341 "params": { 00:14:01.341 "name": "malloc0", 00:14:01.341 "num_blocks": 8192, 00:14:01.341 "block_size": 4096, 00:14:01.341 "physical_block_size": 4096, 00:14:01.341 "uuid": "bff8fa2d-1722-47ee-b5dd-5cf16b6f3fc3", 00:14:01.341 "optimal_io_boundary": 0, 00:14:01.341 "md_size": 0, 00:14:01.341 "dif_type": 0, 00:14:01.341 
"dif_is_head_of_md": false, 00:14:01.341 "dif_pi_format": 0 00:14:01.341 } 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "method": "bdev_wait_for_examine" 00:14:01.341 } 00:14:01.341 ] 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "subsystem": "scsi", 00:14:01.341 "config": null 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "subsystem": "scheduler", 00:14:01.341 "config": [ 00:14:01.341 { 00:14:01.341 "method": "framework_set_scheduler", 00:14:01.341 "params": { 00:14:01.341 "name": "static" 00:14:01.341 } 00:14:01.341 } 00:14:01.341 ] 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "subsystem": "vhost_scsi", 00:14:01.341 "config": [] 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "subsystem": "vhost_blk", 00:14:01.341 "config": [] 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "subsystem": "ublk", 00:14:01.341 "config": [ 00:14:01.341 { 00:14:01.341 "method": "ublk_create_target", 00:14:01.341 "params": { 00:14:01.341 "cpumask": "1" 00:14:01.341 } 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "method": "ublk_start_disk", 00:14:01.341 "params": { 00:14:01.341 "bdev_name": "malloc0", 00:14:01.341 "ublk_id": 0, 00:14:01.341 "num_queues": 1, 00:14:01.341 "queue_depth": 128 00:14:01.341 } 00:14:01.341 } 00:14:01.341 ] 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "subsystem": "nbd", 00:14:01.341 "config": [] 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "subsystem": "nvmf", 00:14:01.341 "config": [ 00:14:01.341 { 00:14:01.341 "method": "nvmf_set_config", 00:14:01.341 "params": { 00:14:01.341 "discovery_filter": "match_any", 00:14:01.341 "admin_cmd_passthru": { 00:14:01.341 "identify_ctrlr": false 00:14:01.341 }, 00:14:01.341 "dhchap_digests": [ 00:14:01.341 "sha256", 00:14:01.341 "sha384", 00:14:01.341 "sha512" 00:14:01.341 ], 00:14:01.341 "dhchap_dhgroups": [ 00:14:01.341 "null", 00:14:01.341 "ffdhe2048", 00:14:01.341 "ffdhe3072", 00:14:01.341 "ffdhe4096", 00:14:01.341 "ffdhe6144", 00:14:01.341 "ffdhe8192" 00:14:01.341 ] 00:14:01.341 } 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "method": "nvmf_set_max_subsystems", 00:14:01.341 "params": { 00:14:01.341 "max_subsystems": 1024 00:14:01.341 } 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "method": "nvmf_set_crdt", 00:14:01.341 "params": { 00:14:01.341 "crdt1": 0, 00:14:01.341 "crdt2": 0, 00:14:01.341 "crdt3": 0 00:14:01.341 } 00:14:01.341 } 00:14:01.341 ] 00:14:01.341 }, 00:14:01.341 { 00:14:01.341 "subsystem": "iscsi", 00:14:01.341 "config": [ 00:14:01.341 { 00:14:01.341 "method": "iscsi_set_options", 00:14:01.341 "params": { 00:14:01.341 "node_base": "iqn.2016-06.io.spdk", 00:14:01.341 "max_sessions": 128, 00:14:01.341 "max_connections_per_session": 2, 00:14:01.341 "max_queue_depth": 64, 00:14:01.341 "default_time2wait": 2, 00:14:01.341 "default_time2retain": 20, 00:14:01.341 "first_burst_length": 8192, 00:14:01.341 "immediate_data": true, 00:14:01.341 "allow_duplicated_isid": false, 00:14:01.341 "error_recovery_level": 0, 00:14:01.341 "nop_timeout": 60, 00:14:01.341 "nop_in_interval": 30, 00:14:01.341 "disable_chap": false, 00:14:01.341 "require_chap": false, 00:14:01.341 "mutual_chap": false, 00:14:01.341 "chap_group": 0, 00:14:01.341 "max_large_datain_per_connection": 64, 00:14:01.341 "max_r2t_per_connection": 4, 00:14:01.341 "pdu_pool_size": 36864, 00:14:01.341 "immediate_data_pool_size": 16384, 00:14:01.341 "data_out_pool_size": 2048 00:14:01.341 } 00:14:01.341 } 00:14:01.341 ] 00:14:01.341 } 00:14:01.341 ] 00:14:01.341 }' 00:14:01.341 [2024-11-10 05:15:54.412694] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:01.341 [2024-11-10 05:15:54.412854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82762 ] 00:14:01.623 [2024-11-10 05:15:54.565247] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.623 [2024-11-10 05:15:54.623450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.884 [2024-11-10 05:15:55.002014] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:01.884 [2024-11-10 05:15:55.002376] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:01.884 [2024-11-10 05:15:55.010172] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:01.884 [2024-11-10 05:15:55.010266] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:01.884 [2024-11-10 05:15:55.010275] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:01.884 [2024-11-10 05:15:55.010284] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:01.884 [2024-11-10 05:15:55.019125] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:01.884 [2024-11-10 05:15:55.019157] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:01.884 [2024-11-10 05:15:55.026028] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:01.884 [2024-11-10 05:15:55.026152] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:01.884 [2024-11-10 05:15:55.042086] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82762 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82762 ']' 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82762 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82762 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:02.146 killing process with pid 82762 00:14:02.146 
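What ublk.sh@118 above does with that JSON is feed it back on a file descriptor rather than a file: the config is echoed into a process substitution, and the new target reads it as /dev/fd/63 via -c. A minimal sketch of the same pattern, with SPDK_BIN_DIR, rpc_py and CONFIG_JSON assumed, and with the waitforlisten readiness poll reduced to its core:

  # boot a fresh target straight from the saved JSON, no temp file needed
  "$SPDK_BIN_DIR/spdk_tgt" -L ublk -c <(echo "$CONFIG_JSON") &
  tgtpid=$!

  # poll the RPC socket until the app answers (simplified waitforlisten)
  for _ in $(seq 1 100); do
    "$rpc_py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
    sleep 0.1
  done

Once the restored target reports /dev/ublkb0 again, the test tears it down; the kill trace continues below.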
05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82762' 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82762 00:14:02.146 05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82762 00:14:02.405 [2024-11-10 05:15:55.610236] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:02.664 [2024-11-10 05:15:55.642122] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:02.664 [2024-11-10 05:15:55.642265] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:02.664 [2024-11-10 05:15:55.650022] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:02.664 [2024-11-10 05:15:55.650083] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:02.664 [2024-11-10 05:15:55.650092] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:02.664 [2024-11-10 05:15:55.650122] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:02.664 [2024-11-10 05:15:55.650261] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:02.922 05:15:55 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:02.922 00:14:02.922 real 0m3.826s 00:14:02.922 user 0m2.583s 00:14:02.922 sys 0m1.898s 00:14:02.922 ************************************ 00:14:02.922 END TEST test_save_ublk_config 00:14:02.922 ************************************ 00:14:02.922 05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:02.922 05:15:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:02.922 05:15:56 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82813 00:14:02.923 05:15:56 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:02.923 05:15:56 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82813 00:14:02.923 05:15:56 ublk -- common/autotest_common.sh@831 -- # '[' -z 82813 ']' 00:14:02.923 05:15:56 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:02.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.923 05:15:56 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.923 05:15:56 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.923 05:15:56 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.923 05:15:56 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.923 05:15:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.923 [2024-11-10 05:15:56.110548] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:02.923 [2024-11-10 05:15:56.110947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82813 ] 00:14:03.182 [2024-11-10 05:15:56.257550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:03.182 [2024-11-10 05:15:56.290801] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.182 [2024-11-10 05:15:56.290873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.754 05:15:56 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:03.754 05:15:56 ublk -- common/autotest_common.sh@864 -- # return 0 00:14:03.754 05:15:56 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:03.754 05:15:56 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:03.754 05:15:56 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:03.754 05:15:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:03.754 ************************************ 00:14:03.754 START TEST test_create_ublk 00:14:03.754 ************************************ 00:14:03.754 05:15:56 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:14:03.754 05:15:56 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:03.754 05:15:56 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.754 05:15:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:03.754 [2024-11-10 05:15:56.978014] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:03.754 [2024-11-10 05:15:56.979735] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:03.754 05:15:56 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.754 05:15:56 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:03.754 05:15:56 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:03.754 05:15:56 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.754 05:15:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.014 05:15:57 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.014 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:04.014 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:04.014 05:15:57 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.014 05:15:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.014 [2024-11-10 05:15:57.070182] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:04.015 [2024-11-10 05:15:57.070652] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:04.015 [2024-11-10 05:15:57.070671] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:04.015 [2024-11-10 05:15:57.070681] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:04.015 [2024-11-10 05:15:57.078051] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:04.015 [2024-11-10 05:15:57.078094] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:04.015 
[2024-11-10 05:15:57.086019] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:04.015 [2024-11-10 05:15:57.086748] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:04.015 [2024-11-10 05:15:57.109050] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:04.015 05:15:57 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.015 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:04.015 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:04.015 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:04.015 05:15:57 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.015 05:15:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.015 05:15:57 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.015 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:04.015 { 00:14:04.015 "ublk_device": "/dev/ublkb0", 00:14:04.015 "id": 0, 00:14:04.015 "queue_depth": 512, 00:14:04.015 "num_queues": 4, 00:14:04.015 "bdev_name": "Malloc0" 00:14:04.015 } 00:14:04.015 ]' 00:14:04.015 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:04.015 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:04.015 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:04.015 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:04.015 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:04.015 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:04.015 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:04.276 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:04.276 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:04.276 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:04.276 05:15:57 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:04.276 05:15:57 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:04.276 05:15:57 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:04.276 05:15:57 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:04.276 05:15:57 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:04.276 05:15:57 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:04.276 05:15:57 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:04.276 05:15:57 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:04.276 05:15:57 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:04.276 05:15:57 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:04.276 05:15:57 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
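The jq checks above pull individual fields out of ublk_get_disks and compare them to the values the disk was created with. Collapsed into a standalone sketch (rpc_py location assumed):

  dev=$("$rpc_py" ublk_get_disks | jq -r '.[0].ublk_device')
  [[ $dev == /dev/ublkb0 ]]   # RPC reports the expected node name
  [[ -b $dev ]]               # and the kernel actually created the block device

The fio command template assembled just above is then executed against that device, which is what the next lines show.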
00:14:04.276 05:15:57 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:04.276 fio: verification read phase will never start because write phase uses all of runtime 00:14:04.276 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:04.276 fio-3.35 00:14:04.276 Starting 1 process 00:14:16.492 00:14:16.492 fio_test: (groupid=0, jobs=1): err= 0: pid=82857: Sun Nov 10 05:16:07 2024 00:14:16.492 write: IOPS=20.2k, BW=78.7MiB/s (82.6MB/s)(788MiB/10001msec); 0 zone resets 00:14:16.492 clat (usec): min=33, max=3892, avg=48.75, stdev=79.33 00:14:16.492 lat (usec): min=34, max=3892, avg=49.24, stdev=79.36 00:14:16.492 clat percentiles (usec): 00:14:16.492 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 42], 00:14:16.492 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:14:16.492 | 70.00th=[ 47], 80.00th=[ 49], 90.00th=[ 55], 95.00th=[ 62], 00:14:16.492 | 99.00th=[ 74], 99.50th=[ 84], 99.90th=[ 1172], 99.95th=[ 2376], 00:14:16.492 | 99.99th=[ 3425] 00:14:16.492 bw ( KiB/s): min=62616, max=83672, per=99.90%, avg=80550.32, stdev=4706.95, samples=19 00:14:16.492 iops : min=15654, max=20918, avg=20137.58, stdev=1176.74, samples=19 00:14:16.492 lat (usec) : 50=84.27%, 100=15.43%, 250=0.15%, 500=0.04%, 750=0.01% 00:14:16.492 lat (usec) : 1000=0.01% 00:14:16.492 lat (msec) : 2=0.04%, 4=0.06% 00:14:16.492 cpu : usr=3.63%, sys=18.22%, ctx=201596, majf=0, minf=794 00:14:16.492 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:16.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.492 issued rwts: total=0,201603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.492 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:16.492 00:14:16.492 Run status group 0 (all jobs): 00:14:16.492 WRITE: bw=78.7MiB/s (82.6MB/s), 78.7MiB/s-78.7MiB/s (82.6MB/s-82.6MB/s), io=788MiB (826MB), run=10001-10001msec 00:14:16.492 00:14:16.492 Disk stats (read/write): 00:14:16.492 ublkb0: ios=0/199420, merge=0/0, ticks=0/7815, in_queue=7816, util=99.07% 00:14:16.492 05:16:07 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.492 [2024-11-10 05:16:07.549784] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:16.492 [2024-11-10 05:16:07.592424] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:16.492 [2024-11-10 05:16:07.593326] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:16.492 [2024-11-10 05:16:07.603018] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:16.492 [2024-11-10 05:16:07.603245] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:16.492 [2024-11-10 05:16:07.603251] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.492 05:16:07 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
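The totals in the fio summary above are self-consistent: 201603 completed 4 KiB writes in 10.001 s. As a quick sanity check of the reported io= and bandwidth figures:

  echo $((201603 * 4096))   # 825765888 bytes, i.e. ~826 MB / ~788 MiB as reported
  # 787.6 MiB / 10.001 s ~= 78.7 MiB/s, matching the WRITE bandwidth line

The ublk_stop_disk 0 above already succeeded, so the second stop attempt traced next is expected to fail, which is what the NOT wrapper asserts.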
00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.492 [2024-11-10 05:16:07.619084] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:16.492 request: 00:14:16.492 { 00:14:16.492 "ublk_id": 0, 00:14:16.492 "method": "ublk_stop_disk", 00:14:16.492 "req_id": 1 00:14:16.492 } 00:14:16.492 Got JSON-RPC error response 00:14:16.492 response: 00:14:16.492 { 00:14:16.492 "code": -19, 00:14:16.492 "message": "No such device" 00:14:16.492 } 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:16.492 05:16:07 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.492 [2024-11-10 05:16:07.635064] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:16.492 [2024-11-10 05:16:07.636499] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:16.492 [2024-11-10 05:16:07.636528] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.492 05:16:07 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.492 05:16:07 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:16.492 05:16:07 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.492 05:16:07 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:16.492 05:16:07 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:16.492 05:16:07 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:16.492 05:16:07 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.492 05:16:07 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:16.492 05:16:07 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:16.492 ************************************ 00:14:16.492 END TEST test_create_ublk 00:14:16.492 ************************************ 00:14:16.492 05:16:07 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:16.492 00:14:16.492 real 0m10.823s 00:14:16.492 user 0m0.676s 00:14:16.492 sys 0m1.905s 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:16.492 05:16:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.492 05:16:07 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:16.492 05:16:07 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:16.492 05:16:07 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:16.492 05:16:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.492 ************************************ 00:14:16.492 START TEST test_create_multi_ublk 00:14:16.492 ************************************ 00:14:16.492 05:16:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:16.492 05:16:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:16.492 05:16:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.492 05:16:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.492 [2024-11-10 05:16:07.846002] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:16.493 [2024-11-10 05:16:07.846871] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.493 [2024-11-10 05:16:07.918117] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:14:16.493 [2024-11-10 05:16:07.918414] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:16.493 [2024-11-10 05:16:07.918427] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:16.493 [2024-11-10 05:16:07.918432] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:16.493 [2024-11-10 05:16:07.930040] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:16.493 [2024-11-10 05:16:07.930059] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:16.493 [2024-11-10 05:16:07.942010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:16.493 [2024-11-10 05:16:07.942477] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:16.493 [2024-11-10 05:16:07.956225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.493 05:16:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.493 [2024-11-10 05:16:08.039111] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:16.493 [2024-11-10 05:16:08.039410] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:16.493 [2024-11-10 05:16:08.039422] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:16.493 [2024-11-10 05:16:08.039428] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:16.493 [2024-11-10 05:16:08.051038] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:16.493 [2024-11-10 05:16:08.051059] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:16.493 [2024-11-10 05:16:08.063009] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:16.493 [2024-11-10 05:16:08.063486] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:16.493 [2024-11-10 05:16:08.070032] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.493 05:16:08 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.493 [2024-11-10 05:16:08.147103] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:16.493 [2024-11-10 05:16:08.147397] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:16.493 [2024-11-10 05:16:08.147409] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:16.493 [2024-11-10 05:16:08.147414] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:16.493 [2024-11-10 05:16:08.159039] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:16.493 [2024-11-10 05:16:08.159056] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:16.493 [2024-11-10 05:16:08.171022] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:16.493 [2024-11-10 05:16:08.171494] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:16.493 [2024-11-10 05:16:08.188014] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.493 [2024-11-10 05:16:08.271098] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:16.493 [2024-11-10 05:16:08.271390] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:16.493 [2024-11-10 05:16:08.271402] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:16.493 [2024-11-10 05:16:08.271408] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:16.493 [2024-11-10 
05:16:08.283016] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:16.493 [2024-11-10 05:16:08.283039] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:16.493 [2024-11-10 05:16:08.295022] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:16.493 [2024-11-10 05:16:08.295491] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:16.493 [2024-11-10 05:16:08.308037] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:16.493 { 00:14:16.493 "ublk_device": "/dev/ublkb0", 00:14:16.493 "id": 0, 00:14:16.493 "queue_depth": 512, 00:14:16.493 "num_queues": 4, 00:14:16.493 "bdev_name": "Malloc0" 00:14:16.493 }, 00:14:16.493 { 00:14:16.493 "ublk_device": "/dev/ublkb1", 00:14:16.493 "id": 1, 00:14:16.493 "queue_depth": 512, 00:14:16.493 "num_queues": 4, 00:14:16.493 "bdev_name": "Malloc1" 00:14:16.493 }, 00:14:16.493 { 00:14:16.493 "ublk_device": "/dev/ublkb2", 00:14:16.493 "id": 2, 00:14:16.493 "queue_depth": 512, 00:14:16.493 "num_queues": 4, 00:14:16.493 "bdev_name": "Malloc2" 00:14:16.493 }, 00:14:16.493 { 00:14:16.493 "ublk_device": "/dev/ublkb3", 00:14:16.493 "id": 3, 00:14:16.493 "queue_depth": 512, 00:14:16.493 "num_queues": 4, 00:14:16.493 "bdev_name": "Malloc3" 00:14:16.493 } 00:14:16.493 ]' 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
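All four devices were created by the same seq-driven loop (ublk.sh@64-68 above), and the per-device jq field checks now repeat for each entry of the ublk_get_disks array. The creation side, as a sketch with the RPC client path assumed:

  for i in $(seq 0 3); do
    "$rpc_py" bdev_malloc_create -b "Malloc$i" 128 4096
    "$rpc_py" ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
  done
  "$rpc_py" ublk_get_disks | jq length   # expect 4

The remaining field checks for devices 1 through 3 continue below.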
00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:16.493 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.494 05:16:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.494 [2024-11-10 05:16:08.964079] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:16.494 [2024-11-10 05:16:08.994447] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:16.494 [2024-11-10 05:16:08.995419] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:16.494 [2024-11-10 05:16:09.004011] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:16.494 [2024-11-10 05:16:09.004238] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:16.494 [2024-11-10 05:16:09.004250] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.494 [2024-11-10 05:16:09.020080] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:16.494 [2024-11-10 05:16:09.052033] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:16.494 [2024-11-10 05:16:09.052677] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:16.494 [2024-11-10 05:16:09.057062] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:16.494 [2024-11-10 05:16:09.057295] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:16.494 [2024-11-10 05:16:09.057306] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.494 [2024-11-10 05:16:09.076103] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:16.494 [2024-11-10 05:16:09.110035] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:16.494 [2024-11-10 05:16:09.110637] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:16.494 [2024-11-10 05:16:09.119015] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:16.494 [2024-11-10 05:16:09.119257] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:16.494 [2024-11-10 05:16:09.119268] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
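Disks 0 through 2 are gone at this point and the ublk3 stop traced next completes the set; teardown then mirrors setup in reverse. A sketch of the whole sequence, assuming the same rpc_py client:

  for i in $(seq 0 3); do
    "$rpc_py" ublk_stop_disk "$i"        # STOP_DEV then DEL_DEV per device
  done
  "$rpc_py" -t 120 ublk_destroy_target   # generous timeout, as in ublk.sh@91
  for i in $(seq 0 3); do
    "$rpc_py" bdev_malloc_delete "Malloc$i"
  done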
00:14:16.494 [2024-11-10 05:16:09.127084] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:16.494 [2024-11-10 05:16:09.164035] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:16.494 [2024-11-10 05:16:09.164591] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:16.494 [2024-11-10 05:16:09.171003] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:16.494 [2024-11-10 05:16:09.171243] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:16.494 [2024-11-10 05:16:09.171254] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:16.494 [2024-11-10 05:16:09.355088] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:16.494 [2024-11-10 05:16:09.356287] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:16.494 [2024-11-10 05:16:09.356318] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:16.494 ************************************ 00:14:16.494 END TEST test_create_multi_ublk 00:14:16.494 ************************************ 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:16.494 00:14:16.494 real 0m1.875s 00:14:16.494 user 0m0.805s 00:14:16.494 sys 0m0.132s 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:16.494 05:16:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.752 05:16:09 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:16.752 05:16:09 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:16.752 05:16:09 ublk -- ublk/ublk.sh@130 -- # killprocess 82813 00:14:16.752 05:16:09 ublk -- common/autotest_common.sh@950 -- # '[' -z 82813 ']' 00:14:16.752 05:16:09 ublk -- common/autotest_common.sh@954 -- # kill -0 82813 00:14:16.752 05:16:09 ublk -- common/autotest_common.sh@955 -- # uname 00:14:16.752 05:16:09 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:16.752 05:16:09 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82813 00:14:16.752 killing process with pid 82813 00:14:16.752 05:16:09 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:16.752 05:16:09 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:16.752 05:16:09 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82813' 00:14:16.752 05:16:09 ublk -- common/autotest_common.sh@969 -- # kill 82813 00:14:16.752 05:16:09 ublk -- common/autotest_common.sh@974 -- # wait 82813 00:14:16.752 [2024-11-10 05:16:09.915187] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:16.752 [2024-11-10 05:16:09.915234] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:17.010 00:14:17.010 real 0m18.225s 00:14:17.010 user 0m28.004s 00:14:17.010 sys 0m8.390s 00:14:17.010 05:16:10 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:17.010 05:16:10 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:17.010 ************************************ 00:14:17.010 END TEST ublk 00:14:17.010 ************************************ 00:14:17.010 05:16:10 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:17.010 05:16:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:14:17.010 05:16:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.010 05:16:10 -- common/autotest_common.sh@10 -- # set +x 00:14:17.268 ************************************ 00:14:17.268 START TEST ublk_recovery 00:14:17.268 ************************************ 00:14:17.268 05:16:10 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:17.268 * Looking for test storage... 00:14:17.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:17.268 05:16:10 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:17.268 05:16:10 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:17.268 05:16:10 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:17.268 05:16:10 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.268 05:16:10 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:17.268 05:16:10 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.268 05:16:10 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:17.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.268 --rc genhtml_branch_coverage=1 00:14:17.268 --rc genhtml_function_coverage=1 00:14:17.268 --rc genhtml_legend=1 00:14:17.268 --rc geninfo_all_blocks=1 00:14:17.268 --rc geninfo_unexecuted_blocks=1 00:14:17.268 00:14:17.268 ' 00:14:17.268 05:16:10 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:17.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.268 --rc genhtml_branch_coverage=1 00:14:17.268 --rc genhtml_function_coverage=1 00:14:17.268 --rc genhtml_legend=1 00:14:17.268 --rc geninfo_all_blocks=1 00:14:17.268 --rc geninfo_unexecuted_blocks=1 00:14:17.268 00:14:17.268 ' 00:14:17.268 05:16:10 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:17.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.268 --rc genhtml_branch_coverage=1 00:14:17.268 --rc genhtml_function_coverage=1 00:14:17.268 --rc genhtml_legend=1 00:14:17.268 --rc geninfo_all_blocks=1 00:14:17.268 --rc geninfo_unexecuted_blocks=1 00:14:17.268 00:14:17.268 ' 00:14:17.268 05:16:10 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:17.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.268 --rc genhtml_branch_coverage=1 00:14:17.268 --rc genhtml_function_coverage=1 00:14:17.268 --rc genhtml_legend=1 00:14:17.268 --rc geninfo_all_blocks=1 00:14:17.268 --rc geninfo_unexecuted_blocks=1 00:14:17.268 00:14:17.268 ' 00:14:17.268 05:16:10 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:17.268 05:16:10 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:17.268 05:16:10 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:17.268 05:16:10 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:17.268 05:16:10 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:17.268 05:16:10 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:17.268 05:16:10 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:17.268 05:16:10 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:17.268 05:16:10 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:17.268 05:16:10 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:17.268 05:16:10 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=83180 00:14:17.268 05:16:10 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:17.268 05:16:10 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:17.268 05:16:10 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 83180 00:14:17.268 05:16:10 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83180 ']' 00:14:17.268 05:16:10 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.268 05:16:10 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.268 05:16:10 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.268 05:16:10 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.268 05:16:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.268 [2024-11-10 05:16:10.439399] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:17.268 [2024-11-10 05:16:10.439650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83180 ] 00:14:17.525 [2024-11-10 05:16:10.579998] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:17.525 [2024-11-10 05:16:10.609426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.525 [2024-11-10 05:16:10.609508] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.091 05:16:11 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.091 05:16:11 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:18.091 05:16:11 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:18.091 05:16:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.091 05:16:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.091 [2024-11-10 05:16:11.285011] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:18.091 [2024-11-10 05:16:11.285973] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:18.091 05:16:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.091 05:16:11 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:18.091 05:16:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.091 05:16:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.091 malloc0 00:14:18.091 05:16:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.091 05:16:11 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:18.091 05:16:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.091 05:16:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:18.091 [2024-11-10 05:16:11.317096] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 
num_queues 2 queue_depth 128 00:14:18.091 [2024-11-10 05:16:11.317178] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:18.092 [2024-11-10 05:16:11.317188] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:18.092 [2024-11-10 05:16:11.317200] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:18.349 [2024-11-10 05:16:11.326077] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:18.349 [2024-11-10 05:16:11.326099] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:18.349 [2024-11-10 05:16:11.333012] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:18.349 [2024-11-10 05:16:11.333124] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:18.349 [2024-11-10 05:16:11.355011] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:18.349 1 00:14:18.349 05:16:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.349 05:16:11 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:19.283 05:16:12 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=83213 00:14:19.283 05:16:12 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:19.283 05:16:12 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:19.283 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:19.283 fio-3.35 00:14:19.283 Starting 1 process 00:14:24.547 05:16:17 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 83180 00:14:24.547 05:16:17 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:29.811 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 83180 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:29.811 05:16:22 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=83323 00:14:29.811 05:16:22 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:29.811 05:16:22 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:29.811 05:16:22 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 83323 00:14:29.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.811 05:16:22 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83323 ']' 00:14:29.811 05:16:22 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.811 05:16:22 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:29.811 05:16:22 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.811 05:16:22 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:29.811 05:16:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.811 [2024-11-10 05:16:22.448161] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
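
The restart above is the second half of the recovery scenario: a ublk target backed by a 64 MiB malloc bdev was exposed as /dev/ublkb1 (two queues, depth 128), fio was pinned to cores 2-3 against it, and pid 83180 was then killed with SIGKILL mid-I/O. Condensed from the commands visible in the trace, with rpc.py standing in for scripts/rpc.py talking to /var/tmp/spdk.sock:

  spdk_tgt -m 0x3 -L ublk &                      # first target, pid 83180
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096   # 64 MiB backing bdev, 4 KiB blocks
  rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # exposes /dev/ublkb1
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &
  kill -9 83180                                  # crash the target while fio is running
  spdk_tgt -m 0x3 -L ublk &                      # second target, pid 83323
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_recover_disk malloc0 1             # re-adopt the still-live /dev/ublkb1

As the trace below shows, the recovery path retries UBLK_CMD_GET_DEV_INFO about once a second while the kernel still reports device state 1, then issues UBLK_CMD_START_USER_RECOVERY and, once the queues reattach, UBLK_CMD_END_USER_RECOVERY.
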
00:14:29.811 [2024-11-10 05:16:22.448442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83323 ] 00:14:29.811 [2024-11-10 05:16:22.594921] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:29.811 [2024-11-10 05:16:22.627098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.811 [2024-11-10 05:16:22.627147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.069 05:16:23 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:30.069 05:16:23 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:30.069 05:16:23 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:30.069 05:16:23 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.069 05:16:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.069 [2024-11-10 05:16:23.246008] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:30.069 [2024-11-10 05:16:23.247181] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:30.069 05:16:23 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.069 05:16:23 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:30.069 05:16:23 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.069 05:16:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.069 malloc0 00:14:30.070 05:16:23 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.070 05:16:23 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:30.070 05:16:23 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.070 05:16:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.070 [2024-11-10 05:16:23.278142] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:30.070 [2024-11-10 05:16:23.278188] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:30.070 [2024-11-10 05:16:23.278196] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:30.070 [2024-11-10 05:16:23.286050] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:30.070 [2024-11-10 05:16:23.286070] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:30.070 1 00:14:30.070 05:16:23 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.070 05:16:23 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 83213 00:14:31.475 [2024-11-10 05:16:24.286113] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:31.475 [2024-11-10 05:16:24.294011] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:31.475 [2024-11-10 05:16:24.294040] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:32.410 [2024-11-10 05:16:25.294066] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:32.410 [2024-11-10 05:16:25.298024] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:32.410 [2024-11-10 05:16:25.298036] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:33.343 [2024-11-10 05:16:26.298063] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:33.343 [2024-11-10 05:16:26.302033] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:33.343 [2024-11-10 05:16:26.302057] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:33.343 [2024-11-10 05:16:26.302064] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:33.343 [2024-11-10 05:16:26.302149] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:55.264 [2024-11-10 05:16:47.544013] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:55.264 [2024-11-10 05:16:47.547311] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:55.264 [2024-11-10 05:16:47.557184] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:55.264 [2024-11-10 05:16:47.557214] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:21.853 00:15:21.853 fio_test: (groupid=0, jobs=1): err= 0: pid=83216: Sun Nov 10 05:17:12 2024 00:15:21.853 read: IOPS=15.8k, BW=61.9MiB/s (64.9MB/s)(3713MiB/60002msec) 00:15:21.853 slat (nsec): min=916, max=680200, avg=4816.27, stdev=1799.64 00:15:21.853 clat (usec): min=686, max=30198k, avg=4125.51, stdev=255372.59 00:15:21.853 lat (usec): min=690, max=30198k, avg=4130.32, stdev=255372.59 00:15:21.853 clat percentiles (usec): 00:15:21.853 | 1.00th=[ 1582], 5.00th=[ 1696], 10.00th=[ 1729], 20.00th=[ 1745], 00:15:21.853 | 30.00th=[ 1762], 40.00th=[ 1778], 50.00th=[ 1795], 60.00th=[ 1811], 00:15:21.853 | 70.00th=[ 1827], 80.00th=[ 1860], 90.00th=[ 2376], 95.00th=[ 2900], 00:15:21.853 | 99.00th=[ 4883], 99.50th=[ 5342], 99.90th=[ 7046], 99.95th=[ 8029], 00:15:21.853 | 99.99th=[12780] 00:15:21.853 bw ( KiB/s): min= 5104, max=135968, per=100.00%, avg=124820.10, stdev=22159.80, samples=60 00:15:21.853 iops : min= 1276, max=33992, avg=31205.05, stdev=5539.96, samples=60 00:15:21.853 write: IOPS=15.8k, BW=61.8MiB/s (64.8MB/s)(3709MiB/60002msec); 0 zone resets 00:15:21.853 slat (nsec): min=951, max=421918, avg=4846.95, stdev=1495.80 00:15:21.853 clat (usec): min=512, max=30198k, avg=3948.04, stdev=240027.66 00:15:21.853 lat (usec): min=520, max=30198k, avg=3952.89, stdev=240027.66 00:15:21.853 clat percentiles (usec): 00:15:21.853 | 1.00th=[ 1631], 5.00th=[ 1778], 10.00th=[ 1811], 20.00th=[ 1844], 00:15:21.853 | 30.00th=[ 1860], 40.00th=[ 1876], 50.00th=[ 1876], 60.00th=[ 1893], 00:15:21.853 | 70.00th=[ 1909], 80.00th=[ 1958], 90.00th=[ 2442], 95.00th=[ 2835], 00:15:21.853 | 99.00th=[ 4883], 99.50th=[ 5407], 99.90th=[ 7046], 99.95th=[ 8029], 00:15:21.853 | 99.99th=[12780] 00:15:21.853 bw ( KiB/s): min= 5112, max=135312, per=100.00%, avg=124640.27, stdev=22195.41, samples=60 00:15:21.853 iops : min= 1278, max=33828, avg=31160.07, stdev=5548.85, samples=60 00:15:21.853 lat (usec) : 750=0.01%, 1000=0.01% 00:15:21.853 lat (msec) : 2=83.84%, 4=13.70%, 10=2.42%, 20=0.02%, >=2000=0.01% 00:15:21.853 cpu : usr=3.35%, sys=15.57%, ctx=63835, majf=0, minf=13 00:15:21.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:21.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:21.853 
issued rwts: total=950581,949430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:21.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:21.853 00:15:21.853 Run status group 0 (all jobs): 00:15:21.853 READ: bw=61.9MiB/s (64.9MB/s), 61.9MiB/s-61.9MiB/s (64.9MB/s-64.9MB/s), io=3713MiB (3894MB), run=60002-60002msec 00:15:21.853 WRITE: bw=61.8MiB/s (64.8MB/s), 61.8MiB/s-61.8MiB/s (64.8MB/s-64.8MB/s), io=3709MiB (3889MB), run=60002-60002msec 00:15:21.853 00:15:21.853 Disk stats (read/write): 00:15:21.854 ublkb1: ios=947042/945841, merge=0/0, ticks=3868659/3620356, in_queue=7489016, util=99.93% 00:15:21.854 05:17:12 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:21.854 05:17:12 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.854 05:17:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:21.854 [2024-11-10 05:17:12.617172] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:21.854 [2024-11-10 05:17:12.667026] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:21.854 [2024-11-10 05:17:12.667245] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:21.854 [2024-11-10 05:17:12.674014] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:21.854 [2024-11-10 05:17:12.674172] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:21.854 [2024-11-10 05:17:12.674297] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:21.854 05:17:12 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.854 05:17:12 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:21.854 05:17:12 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.854 05:17:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:21.854 [2024-11-10 05:17:12.690065] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:21.854 [2024-11-10 05:17:12.690979] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:21.854 [2024-11-10 05:17:12.691016] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:21.854 05:17:12 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.854 05:17:12 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:21.854 05:17:12 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:21.854 05:17:12 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 83323 00:15:21.854 05:17:12 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 83323 ']' 00:15:21.854 05:17:12 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 83323 00:15:21.854 05:17:12 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:21.854 05:17:12 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:21.854 05:17:12 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83323 00:15:21.854 killing process with pid 83323 00:15:21.854 05:17:12 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:21.854 05:17:12 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:21.854 05:17:12 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83323' 00:15:21.854 05:17:12 ublk_recovery -- common/autotest_common.sh@969 -- # kill 83323 00:15:21.854 05:17:12 ublk_recovery -- common/autotest_common.sh@974 -- # wait 83323 
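
The fio summary above is internally consistent: 950,581 reads in 60.002 s is ~15,842 IOPS, which at the 4 KiB block size is 15,842 x 4096 B ~ 64.9 MB/s (61.9 MiB/s), and 950,581 x 4 KiB ~ 3713 MiB of total READ I/O; the write column checks out the same way. The completion-latency maximum of 30198k usec (~30.2 s) matches the window between the kill -9 at 05:16:17 and "recover done successfully" at 05:16:47: I/O queued against /dev/ublkb1 simply waited out the recovery, which is also why the per-sample bandwidth floor drops to ~5 MiB/s (min=5104 KiB/s) while the average stays near the device ceiling.
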
00:15:21.854 [2024-11-10 05:17:12.895312] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:21.854 [2024-11-10 05:17:12.895359] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:21.854 ************************************ 00:15:21.854 END TEST ublk_recovery 00:15:21.854 ************************************ 00:15:21.854 00:15:21.854 real 1m2.951s 00:15:21.854 user 1m46.878s 00:15:21.854 sys 0m19.928s 00:15:21.854 05:17:13 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:21.854 05:17:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:21.854 05:17:13 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:21.854 05:17:13 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:21.854 05:17:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:21.854 05:17:13 -- common/autotest_common.sh@10 -- # set +x 00:15:21.854 05:17:13 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:21.854 05:17:13 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:21.854 05:17:13 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:21.854 05:17:13 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:21.854 05:17:13 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:21.854 05:17:13 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:21.854 05:17:13 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:21.854 05:17:13 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:21.854 05:17:13 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:21.854 05:17:13 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:21.854 05:17:13 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:21.854 05:17:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:21.854 05:17:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:21.854 05:17:13 -- common/autotest_common.sh@10 -- # set +x 00:15:21.854 ************************************ 00:15:21.854 START TEST ftl 00:15:21.854 ************************************ 00:15:21.854 05:17:13 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:21.854 * Looking for test storage... 
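
The lcov gate that opens the ftl run below is the same scripts/common.sh machinery already seen at the top of the ublk_recovery run: both version strings are split on '.' and '-' and compared field by field, with missing fields treated as 0. The idiom, reduced to a standalone sketch (the function name and the final consumer line are illustrative, not the script's own):

  cmp_lt() {                                    # succeeds when version $1 sorts before $2
      local -a a b
      local i
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1                                  # equal versions
  }
  cmp_lt "$(lcov --version | awk '{print $NF}')" 2 \
      && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

The real helper additionally validates each field with [[ $d =~ ^[0-9]+$ ]] before comparing, which is what the decimal calls in the trace are doing; with lcov 1.15 < 2, the branch/function coverage options get set.
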
00:15:21.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:21.854 05:17:13 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:21.854 05:17:13 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:21.854 05:17:13 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:21.854 05:17:13 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:21.854 05:17:13 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:21.854 05:17:13 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:21.854 05:17:13 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:21.854 05:17:13 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:21.854 05:17:13 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:21.854 05:17:13 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:21.854 05:17:13 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:21.854 05:17:13 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:21.854 05:17:13 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:21.854 05:17:13 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:21.854 05:17:13 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:21.854 05:17:13 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:21.854 05:17:13 ftl -- scripts/common.sh@345 -- # : 1 00:15:21.854 05:17:13 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:21.854 05:17:13 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:21.854 05:17:13 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:21.854 05:17:13 ftl -- scripts/common.sh@353 -- # local d=1 00:15:21.854 05:17:13 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:21.854 05:17:13 ftl -- scripts/common.sh@355 -- # echo 1 00:15:21.854 05:17:13 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:21.854 05:17:13 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:21.854 05:17:13 ftl -- scripts/common.sh@353 -- # local d=2 00:15:21.854 05:17:13 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:21.854 05:17:13 ftl -- scripts/common.sh@355 -- # echo 2 00:15:21.854 05:17:13 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:21.854 05:17:13 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:21.854 05:17:13 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:21.854 05:17:13 ftl -- scripts/common.sh@368 -- # return 0 00:15:21.854 05:17:13 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:21.854 05:17:13 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:21.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.854 --rc genhtml_branch_coverage=1 00:15:21.854 --rc genhtml_function_coverage=1 00:15:21.854 --rc genhtml_legend=1 00:15:21.854 --rc geninfo_all_blocks=1 00:15:21.854 --rc geninfo_unexecuted_blocks=1 00:15:21.854 00:15:21.854 ' 00:15:21.854 05:17:13 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:21.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.854 --rc genhtml_branch_coverage=1 00:15:21.854 --rc genhtml_function_coverage=1 00:15:21.854 --rc genhtml_legend=1 00:15:21.854 --rc geninfo_all_blocks=1 00:15:21.854 --rc geninfo_unexecuted_blocks=1 00:15:21.854 00:15:21.854 ' 00:15:21.854 05:17:13 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:21.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.854 --rc genhtml_branch_coverage=1 00:15:21.854 --rc genhtml_function_coverage=1 00:15:21.854 --rc 
genhtml_legend=1 00:15:21.854 --rc geninfo_all_blocks=1 00:15:21.854 --rc geninfo_unexecuted_blocks=1 00:15:21.854 00:15:21.854 ' 00:15:21.854 05:17:13 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:21.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.854 --rc genhtml_branch_coverage=1 00:15:21.854 --rc genhtml_function_coverage=1 00:15:21.854 --rc genhtml_legend=1 00:15:21.854 --rc geninfo_all_blocks=1 00:15:21.854 --rc geninfo_unexecuted_blocks=1 00:15:21.854 00:15:21.854 ' 00:15:21.854 05:17:13 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:21.854 05:17:13 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:21.854 05:17:13 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:21.854 05:17:13 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:21.854 05:17:13 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:21.854 05:17:13 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:21.854 05:17:13 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.854 05:17:13 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:21.854 05:17:13 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:21.854 05:17:13 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.854 05:17:13 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.854 05:17:13 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:21.854 05:17:13 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:21.854 05:17:13 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:21.854 05:17:13 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:21.854 05:17:13 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:21.854 05:17:13 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:21.854 05:17:13 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.854 05:17:13 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.854 05:17:13 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:21.854 05:17:13 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:21.854 05:17:13 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:21.854 05:17:13 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:21.854 05:17:13 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:21.855 05:17:13 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:21.855 05:17:13 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:21.855 05:17:13 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:21.855 05:17:13 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:21.855 05:17:13 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:21.855 05:17:13 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.855 05:17:13 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:21.855 05:17:13 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:15:21.855 05:17:13 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:21.855 05:17:13 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:21.855 05:17:13 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:21.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:21.855 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:21.855 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:21.855 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:21.855 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:21.855 05:17:13 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=84112 00:15:21.855 05:17:13 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:21.855 05:17:13 ftl -- ftl/ftl.sh@38 -- # waitforlisten 84112 00:15:21.855 05:17:13 ftl -- common/autotest_common.sh@831 -- # '[' -z 84112 ']' 00:15:21.855 05:17:13 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.855 05:17:13 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:21.855 05:17:13 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.855 05:17:13 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:21.855 05:17:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:21.855 [2024-11-10 05:17:13.886532] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:21.855 [2024-11-10 05:17:13.886791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84112 ] 00:15:21.855 [2024-11-10 05:17:14.026418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.855 [2024-11-10 05:17:14.055797] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.855 05:17:14 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.855 05:17:14 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:21.855 05:17:14 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:21.855 05:17:14 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:22.113 05:17:15 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:22.113 05:17:15 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:22.679 05:17:15 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:22.679 05:17:15 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:22.679 05:17:15 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:22.679 05:17:15 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:22.679 05:17:15 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:22.679 05:17:15 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:22.679 05:17:15 ftl -- ftl/ftl.sh@50 -- # break 00:15:22.679 05:17:15 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:22.679 05:17:15 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:15:22.679 05:17:15 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:22.679 05:17:15 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:22.937 05:17:16 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:22.937 05:17:16 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:22.937 05:17:16 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:22.937 05:17:16 ftl -- ftl/ftl.sh@63 -- # break 00:15:22.937 05:17:16 ftl -- ftl/ftl.sh@66 -- # killprocess 84112 00:15:22.937 05:17:16 ftl -- common/autotest_common.sh@950 -- # '[' -z 84112 ']' 00:15:22.937 05:17:16 ftl -- common/autotest_common.sh@954 -- # kill -0 84112 00:15:22.937 05:17:16 ftl -- common/autotest_common.sh@955 -- # uname 00:15:22.937 05:17:16 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:22.937 05:17:16 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84112 00:15:22.937 killing process with pid 84112 00:15:22.937 05:17:16 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:22.937 05:17:16 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:22.937 05:17:16 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84112' 00:15:22.937 05:17:16 ftl -- common/autotest_common.sh@969 -- # kill 84112 00:15:22.937 05:17:16 ftl -- common/autotest_common.sh@974 -- # wait 84112 00:15:23.196 05:17:16 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:23.196 05:17:16 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:23.196 05:17:16 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:23.196 05:17:16 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:23.196 05:17:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:23.196 ************************************ 00:15:23.196 START TEST ftl_fio_basic 00:15:23.196 ************************************ 00:15:23.196 05:17:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:23.455 * Looking for test storage... 
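
Device discovery in ftl.sh above comes down to two jq passes over bdev_get_bdevs output: the first picks a namespace with 64-byte metadata (md_size==64) as the non-volatile cache, the second picks any other non-zoned namespace of at least 1310720 blocks as the base device. The filters, as traced:

  rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false
      and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
  # -> 0000:00:10.0, selected as the cache device

  rpc.py bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
      and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
  # -> 0000:00:11.0, passed to ftl_fio_basic as the base device

Both addresses then become the first two arguments of the run_test ftl_fio_basic invocation, with "basic" selecting the workload suite.
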
00:15:23.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:23.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.455 --rc genhtml_branch_coverage=1 00:15:23.455 --rc genhtml_function_coverage=1 00:15:23.455 --rc genhtml_legend=1 00:15:23.455 --rc geninfo_all_blocks=1 00:15:23.455 --rc geninfo_unexecuted_blocks=1 00:15:23.455 00:15:23.455 ' 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:23.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.455 --rc 
genhtml_branch_coverage=1 00:15:23.455 --rc genhtml_function_coverage=1 00:15:23.455 --rc genhtml_legend=1 00:15:23.455 --rc geninfo_all_blocks=1 00:15:23.455 --rc geninfo_unexecuted_blocks=1 00:15:23.455 00:15:23.455 ' 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:23.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.455 --rc genhtml_branch_coverage=1 00:15:23.455 --rc genhtml_function_coverage=1 00:15:23.455 --rc genhtml_legend=1 00:15:23.455 --rc geninfo_all_blocks=1 00:15:23.455 --rc geninfo_unexecuted_blocks=1 00:15:23.455 00:15:23.455 ' 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:23.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.455 --rc genhtml_branch_coverage=1 00:15:23.455 --rc genhtml_function_coverage=1 00:15:23.455 --rc genhtml_legend=1 00:15:23.455 --rc geninfo_all_blocks=1 00:15:23.455 --rc geninfo_unexecuted_blocks=1 00:15:23.455 00:15:23.455 ' 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:23.455 
05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:23.455 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=84233 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 84233 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 84233 ']' 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
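
fio.sh keys its workloads off an associative array, and the basic suite requested here expands to three verify jobs run against one FTL bdev. Reconstructed from the trace above (only the indexing parameter name is a guess):

  declare -A suite
  suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
  suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
  suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
  tests=${suite[$3]}                     # $3 is "basic" in this invocation
  timeout=240
  export FTL_BDEV_NAME=ftl0
  export FTL_JSON_CONF=$testdir/config/ftl.json

The job names presumably map to fio config files under the test's config directory, and FTL_BDEV_NAME/FTL_JSON_CONF are the handles those jobs use to find the ftl0 bdev once it exists.
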
00:15:23.456 05:17:16 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:23.456 05:17:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:23.456 [2024-11-10 05:17:16.609671] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:23.456 [2024-11-10 05:17:16.610254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84233 ] 00:15:23.714 [2024-11-10 05:17:16.754637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:23.714 [2024-11-10 05:17:16.785854] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.714 [2024-11-10 05:17:16.785884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.714 [2024-11-10 05:17:16.785949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:24.279 05:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:24.279 05:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:24.279 05:17:17 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:24.279 05:17:17 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:24.279 05:17:17 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:24.279 05:17:17 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:24.279 05:17:17 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:24.279 05:17:17 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:24.537 05:17:17 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:24.537 05:17:17 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:24.537 05:17:17 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:24.537 05:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:24.537 05:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:24.537 05:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:24.537 05:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:24.537 05:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:24.795 05:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:24.795 { 00:15:24.795 "name": "nvme0n1", 00:15:24.795 "aliases": [ 00:15:24.795 "2a5556b6-ee99-4da3-871f-78a8c52337f6" 00:15:24.795 ], 00:15:24.795 "product_name": "NVMe disk", 00:15:24.795 "block_size": 4096, 00:15:24.795 "num_blocks": 1310720, 00:15:24.795 "uuid": "2a5556b6-ee99-4da3-871f-78a8c52337f6", 00:15:24.795 "numa_id": -1, 00:15:24.795 "assigned_rate_limits": { 00:15:24.795 "rw_ios_per_sec": 0, 00:15:24.795 "rw_mbytes_per_sec": 0, 00:15:24.795 "r_mbytes_per_sec": 0, 00:15:24.795 "w_mbytes_per_sec": 0 00:15:24.795 }, 00:15:24.795 "claimed": false, 00:15:24.795 "zoned": false, 00:15:24.795 "supported_io_types": { 00:15:24.795 "read": true, 00:15:24.795 "write": true, 00:15:24.795 "unmap": true, 00:15:24.795 "flush": true, 00:15:24.795 "reset": true, 00:15:24.795 "nvme_admin": true, 00:15:24.795 "nvme_io": true, 00:15:24.795 "nvme_io_md": 
false, 00:15:24.795 "write_zeroes": true, 00:15:24.795 "zcopy": false, 00:15:24.795 "get_zone_info": false, 00:15:24.795 "zone_management": false, 00:15:24.795 "zone_append": false, 00:15:24.795 "compare": true, 00:15:24.795 "compare_and_write": false, 00:15:24.795 "abort": true, 00:15:24.795 "seek_hole": false, 00:15:24.795 "seek_data": false, 00:15:24.795 "copy": true, 00:15:24.795 "nvme_iov_md": false 00:15:24.795 }, 00:15:24.795 "driver_specific": { 00:15:24.795 "nvme": [ 00:15:24.795 { 00:15:24.795 "pci_address": "0000:00:11.0", 00:15:24.795 "trid": { 00:15:24.795 "trtype": "PCIe", 00:15:24.795 "traddr": "0000:00:11.0" 00:15:24.795 }, 00:15:24.795 "ctrlr_data": { 00:15:24.795 "cntlid": 0, 00:15:24.795 "vendor_id": "0x1b36", 00:15:24.795 "model_number": "QEMU NVMe Ctrl", 00:15:24.795 "serial_number": "12341", 00:15:24.795 "firmware_revision": "8.0.0", 00:15:24.795 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:24.795 "oacs": { 00:15:24.795 "security": 0, 00:15:24.795 "format": 1, 00:15:24.795 "firmware": 0, 00:15:24.795 "ns_manage": 1 00:15:24.795 }, 00:15:24.795 "multi_ctrlr": false, 00:15:24.795 "ana_reporting": false 00:15:24.795 }, 00:15:24.795 "vs": { 00:15:24.796 "nvme_version": "1.4" 00:15:24.796 }, 00:15:24.796 "ns_data": { 00:15:24.796 "id": 1, 00:15:24.796 "can_share": false 00:15:24.796 } 00:15:24.796 } 00:15:24.796 ], 00:15:24.796 "mp_policy": "active_passive" 00:15:24.796 } 00:15:24.796 } 00:15:24.796 ]' 00:15:24.796 05:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:24.796 05:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:24.796 05:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:24.796 05:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:24.796 05:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:24.796 05:17:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:24.796 05:17:17 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:24.796 05:17:17 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:24.796 05:17:17 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:24.796 05:17:17 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:24.796 05:17:17 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:25.053 05:17:18 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:25.053 05:17:18 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:25.310 05:17:18 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=180524a7-d395-4c74-8a2e-24884e7367a3 00:15:25.310 05:17:18 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 180524a7-d395-4c74-8a2e-24884e7367a3 00:15:25.310 05:17:18 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=0df03a9f-14a2-4605-888b-8d65dec23cca 00:15:25.310 05:17:18 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0df03a9f-14a2-4605-888b-8d65dec23cca 00:15:25.310 05:17:18 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:25.310 05:17:18 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:25.310 05:17:18 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=0df03a9f-14a2-4605-888b-8d65dec23cca 00:15:25.310 05:17:18 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:25.310 05:17:18 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 0df03a9f-14a2-4605-888b-8d65dec23cca 00:15:25.310 05:17:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=0df03a9f-14a2-4605-888b-8d65dec23cca 00:15:25.310 05:17:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:25.310 05:17:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:25.310 05:17:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:25.310 05:17:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0df03a9f-14a2-4605-888b-8d65dec23cca 00:15:25.568 05:17:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:25.568 { 00:15:25.568 "name": "0df03a9f-14a2-4605-888b-8d65dec23cca", 00:15:25.568 "aliases": [ 00:15:25.568 "lvs/nvme0n1p0" 00:15:25.568 ], 00:15:25.568 "product_name": "Logical Volume", 00:15:25.568 "block_size": 4096, 00:15:25.568 "num_blocks": 26476544, 00:15:25.568 "uuid": "0df03a9f-14a2-4605-888b-8d65dec23cca", 00:15:25.568 "assigned_rate_limits": { 00:15:25.568 "rw_ios_per_sec": 0, 00:15:25.568 "rw_mbytes_per_sec": 0, 00:15:25.568 "r_mbytes_per_sec": 0, 00:15:25.568 "w_mbytes_per_sec": 0 00:15:25.568 }, 00:15:25.568 "claimed": false, 00:15:25.568 "zoned": false, 00:15:25.568 "supported_io_types": { 00:15:25.568 "read": true, 00:15:25.568 "write": true, 00:15:25.568 "unmap": true, 00:15:25.568 "flush": false, 00:15:25.568 "reset": true, 00:15:25.568 "nvme_admin": false, 00:15:25.568 "nvme_io": false, 00:15:25.568 "nvme_io_md": false, 00:15:25.568 "write_zeroes": true, 00:15:25.568 "zcopy": false, 00:15:25.568 "get_zone_info": false, 00:15:25.568 "zone_management": false, 00:15:25.568 "zone_append": false, 00:15:25.568 "compare": false, 00:15:25.568 "compare_and_write": false, 00:15:25.568 "abort": false, 00:15:25.568 "seek_hole": true, 00:15:25.568 "seek_data": true, 00:15:25.568 "copy": false, 00:15:25.568 "nvme_iov_md": false 00:15:25.568 }, 00:15:25.568 "driver_specific": { 00:15:25.568 "lvol": { 00:15:25.568 "lvol_store_uuid": "180524a7-d395-4c74-8a2e-24884e7367a3", 00:15:25.568 "base_bdev": "nvme0n1", 00:15:25.568 "thin_provision": true, 00:15:25.568 "num_allocated_clusters": 0, 00:15:25.568 "snapshot": false, 00:15:25.568 "clone": false, 00:15:25.568 "esnap_clone": false 00:15:25.568 } 00:15:25.568 } 00:15:25.568 } 00:15:25.568 ]' 00:15:25.568 05:17:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:25.568 05:17:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:25.568 05:17:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:25.825 05:17:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:25.825 05:17:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:25.825 05:17:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:25.825 05:17:18 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:25.825 05:17:18 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:25.825 05:17:18 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:26.083 05:17:19 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:26.083 05:17:19 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:15:26.083 05:17:19 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 0df03a9f-14a2-4605-888b-8d65dec23cca 00:15:26.083 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=0df03a9f-14a2-4605-888b-8d65dec23cca 00:15:26.083 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:26.083 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:26.083 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:26.083 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0df03a9f-14a2-4605-888b-8d65dec23cca 00:15:26.083 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:26.083 { 00:15:26.083 "name": "0df03a9f-14a2-4605-888b-8d65dec23cca", 00:15:26.083 "aliases": [ 00:15:26.083 "lvs/nvme0n1p0" 00:15:26.083 ], 00:15:26.083 "product_name": "Logical Volume", 00:15:26.083 "block_size": 4096, 00:15:26.083 "num_blocks": 26476544, 00:15:26.083 "uuid": "0df03a9f-14a2-4605-888b-8d65dec23cca", 00:15:26.083 "assigned_rate_limits": { 00:15:26.083 "rw_ios_per_sec": 0, 00:15:26.083 "rw_mbytes_per_sec": 0, 00:15:26.083 "r_mbytes_per_sec": 0, 00:15:26.083 "w_mbytes_per_sec": 0 00:15:26.083 }, 00:15:26.083 "claimed": false, 00:15:26.083 "zoned": false, 00:15:26.083 "supported_io_types": { 00:15:26.083 "read": true, 00:15:26.083 "write": true, 00:15:26.083 "unmap": true, 00:15:26.083 "flush": false, 00:15:26.083 "reset": true, 00:15:26.083 "nvme_admin": false, 00:15:26.083 "nvme_io": false, 00:15:26.083 "nvme_io_md": false, 00:15:26.083 "write_zeroes": true, 00:15:26.083 "zcopy": false, 00:15:26.083 "get_zone_info": false, 00:15:26.083 "zone_management": false, 00:15:26.083 "zone_append": false, 00:15:26.083 "compare": false, 00:15:26.083 "compare_and_write": false, 00:15:26.083 "abort": false, 00:15:26.083 "seek_hole": true, 00:15:26.083 "seek_data": true, 00:15:26.083 "copy": false, 00:15:26.083 "nvme_iov_md": false 00:15:26.083 }, 00:15:26.083 "driver_specific": { 00:15:26.083 "lvol": { 00:15:26.083 "lvol_store_uuid": "180524a7-d395-4c74-8a2e-24884e7367a3", 00:15:26.083 "base_bdev": "nvme0n1", 00:15:26.083 "thin_provision": true, 00:15:26.083 "num_allocated_clusters": 0, 00:15:26.083 "snapshot": false, 00:15:26.083 "clone": false, 00:15:26.083 "esnap_clone": false 00:15:26.083 } 00:15:26.083 } 00:15:26.083 } 00:15:26.083 ]' 00:15:26.083 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:26.083 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:26.083 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:26.342 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:26.342 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:26.342 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:26.342 05:17:19 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:26.342 05:17:19 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:26.342 05:17:19 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:26.342 05:17:19 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:26.342 05:17:19 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:26.342 
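
The '[' -eq 1 ']' at fio.sh line 52 above has lost its left operand: whatever variable the script tests there expanded to the empty string, so the unary-operator complaint on the next line is expected, the condition evaluates false (exit status 2), and execution simply continues at line 56. A defensive form that tolerates an unset or empty flag (the variable name here is hypothetical, since the trace does not show it):

  # substitute 0 when the flag is unset or empty, and quote the expansion
  if [[ "${extra_checks:-0}" -eq 1 ]]; then
      :   # flag-gated setup would go here
  fi
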
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:26.342 05:17:19 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 0df03a9f-14a2-4605-888b-8d65dec23cca 00:15:26.342 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=0df03a9f-14a2-4605-888b-8d65dec23cca 00:15:26.342 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:26.342 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:26.342 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:26.342 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0df03a9f-14a2-4605-888b-8d65dec23cca 00:15:26.603 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:26.603 { 00:15:26.603 "name": "0df03a9f-14a2-4605-888b-8d65dec23cca", 00:15:26.603 "aliases": [ 00:15:26.603 "lvs/nvme0n1p0" 00:15:26.603 ], 00:15:26.603 "product_name": "Logical Volume", 00:15:26.603 "block_size": 4096, 00:15:26.603 "num_blocks": 26476544, 00:15:26.603 "uuid": "0df03a9f-14a2-4605-888b-8d65dec23cca", 00:15:26.603 "assigned_rate_limits": { 00:15:26.603 "rw_ios_per_sec": 0, 00:15:26.603 "rw_mbytes_per_sec": 0, 00:15:26.603 "r_mbytes_per_sec": 0, 00:15:26.603 "w_mbytes_per_sec": 0 00:15:26.603 }, 00:15:26.603 "claimed": false, 00:15:26.603 "zoned": false, 00:15:26.603 "supported_io_types": { 00:15:26.603 "read": true, 00:15:26.603 "write": true, 00:15:26.603 "unmap": true, 00:15:26.603 "flush": false, 00:15:26.603 "reset": true, 00:15:26.603 "nvme_admin": false, 00:15:26.603 "nvme_io": false, 00:15:26.603 "nvme_io_md": false, 00:15:26.603 "write_zeroes": true, 00:15:26.603 "zcopy": false, 00:15:26.603 "get_zone_info": false, 00:15:26.603 "zone_management": false, 00:15:26.603 "zone_append": false, 00:15:26.603 "compare": false, 00:15:26.603 "compare_and_write": false, 00:15:26.603 "abort": false, 00:15:26.603 "seek_hole": true, 00:15:26.603 "seek_data": true, 00:15:26.603 "copy": false, 00:15:26.603 "nvme_iov_md": false 00:15:26.603 }, 00:15:26.603 "driver_specific": { 00:15:26.603 "lvol": { 00:15:26.603 "lvol_store_uuid": "180524a7-d395-4c74-8a2e-24884e7367a3", 00:15:26.603 "base_bdev": "nvme0n1", 00:15:26.603 "thin_provision": true, 00:15:26.603 "num_allocated_clusters": 0, 00:15:26.603 "snapshot": false, 00:15:26.603 "clone": false, 00:15:26.603 "esnap_clone": false 00:15:26.603 } 00:15:26.603 } 00:15:26.603 } 00:15:26.603 ]' 00:15:26.603 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:26.603 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:26.603 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:26.603 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:26.603 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:26.603 05:17:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:26.603 05:17:19 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:26.603 05:17:19 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:26.603 05:17:19 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0df03a9f-14a2-4605-888b-8d65dec23cca -c nvc0n1p0 --l2p_dram_limit 60 00:15:26.863 [2024-11-10 05:17:19.974988] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.863 [2024-11-10 05:17:19.975046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:26.863 [2024-11-10 05:17:19.975059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:26.863 [2024-11-10 05:17:19.975069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.864 [2024-11-10 05:17:19.975129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.864 [2024-11-10 05:17:19.975141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:26.864 [2024-11-10 05:17:19.975152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:15:26.864 [2024-11-10 05:17:19.975163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.864 [2024-11-10 05:17:19.975195] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:26.864 [2024-11-10 05:17:19.975507] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:26.864 [2024-11-10 05:17:19.975524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.864 [2024-11-10 05:17:19.975533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:26.864 [2024-11-10 05:17:19.975549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:15:26.864 [2024-11-10 05:17:19.975558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.864 [2024-11-10 05:17:19.975626] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 299db392-82e3-459b-b2d4-a430c3530dc3 00:15:26.864 [2024-11-10 05:17:19.976709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.864 [2024-11-10 05:17:19.976743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:26.864 [2024-11-10 05:17:19.976758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:15:26.864 [2024-11-10 05:17:19.976766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.864 [2024-11-10 05:17:19.981773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.864 [2024-11-10 05:17:19.981912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:26.864 [2024-11-10 05:17:19.981929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.948 ms 00:15:26.864 [2024-11-10 05:17:19.981948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.864 [2024-11-10 05:17:19.982067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.864 [2024-11-10 05:17:19.982080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:26.864 [2024-11-10 05:17:19.982091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:15:26.864 [2024-11-10 05:17:19.982098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.864 [2024-11-10 05:17:19.982154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.864 [2024-11-10 05:17:19.982164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:26.864 [2024-11-10 05:17:19.982174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:26.864 [2024-11-10 05:17:19.982181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:15:26.864 [2024-11-10 05:17:19.982220] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:26.864 [2024-11-10 05:17:19.983625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.864 [2024-11-10 05:17:19.983656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:26.864 [2024-11-10 05:17:19.983665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.421 ms 00:15:26.864 [2024-11-10 05:17:19.983675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.864 [2024-11-10 05:17:19.983724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.864 [2024-11-10 05:17:19.983733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:26.864 [2024-11-10 05:17:19.983749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:26.864 [2024-11-10 05:17:19.983760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.864 [2024-11-10 05:17:19.983780] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:26.864 [2024-11-10 05:17:19.983926] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:26.864 [2024-11-10 05:17:19.983944] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:26.864 [2024-11-10 05:17:19.983956] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:26.864 [2024-11-10 05:17:19.983966] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:26.864 [2024-11-10 05:17:19.983977] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:26.864 [2024-11-10 05:17:19.983985] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:26.864 [2024-11-10 05:17:19.984008] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:26.864 [2024-11-10 05:17:19.984015] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:26.864 [2024-11-10 05:17:19.984024] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:26.864 [2024-11-10 05:17:19.984032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.864 [2024-11-10 05:17:19.984049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:26.864 [2024-11-10 05:17:19.984056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:15:26.864 [2024-11-10 05:17:19.984065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.864 [2024-11-10 05:17:19.984155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.864 [2024-11-10 05:17:19.984166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:26.864 [2024-11-10 05:17:19.984174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:15:26.864 [2024-11-10 05:17:19.984182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.864 [2024-11-10 05:17:19.984297] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:26.864 [2024-11-10 05:17:19.984309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:26.864 
[2024-11-10 05:17:19.984318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:26.864 [2024-11-10 05:17:19.984328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.864 [2024-11-10 05:17:19.984336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:26.864 [2024-11-10 05:17:19.984346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:26.864 [2024-11-10 05:17:19.984354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:26.864 [2024-11-10 05:17:19.984363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:26.864 [2024-11-10 05:17:19.984372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:26.864 [2024-11-10 05:17:19.984381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:26.864 [2024-11-10 05:17:19.984392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:26.864 [2024-11-10 05:17:19.984403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:26.864 [2024-11-10 05:17:19.984411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:26.864 [2024-11-10 05:17:19.984422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:26.864 [2024-11-10 05:17:19.984430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:26.864 [2024-11-10 05:17:19.984439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.864 [2024-11-10 05:17:19.984447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:26.864 [2024-11-10 05:17:19.984457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:26.864 [2024-11-10 05:17:19.984464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.864 [2024-11-10 05:17:19.984473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:26.864 [2024-11-10 05:17:19.984492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:26.864 [2024-11-10 05:17:19.984501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:26.864 [2024-11-10 05:17:19.984508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:26.864 [2024-11-10 05:17:19.984518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:26.864 [2024-11-10 05:17:19.984525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:26.864 [2024-11-10 05:17:19.984534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:26.864 [2024-11-10 05:17:19.984541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:26.864 [2024-11-10 05:17:19.984550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:26.864 [2024-11-10 05:17:19.984558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:26.864 [2024-11-10 05:17:19.984569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:26.864 [2024-11-10 05:17:19.984577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:26.864 [2024-11-10 05:17:19.984586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:26.864 [2024-11-10 05:17:19.984593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:26.864 [2024-11-10 05:17:19.984602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:15:26.864 [2024-11-10 05:17:19.984609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:26.864 [2024-11-10 05:17:19.984620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:26.864 [2024-11-10 05:17:19.984627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:26.864 [2024-11-10 05:17:19.984636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:26.864 [2024-11-10 05:17:19.984644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:26.864 [2024-11-10 05:17:19.984653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.864 [2024-11-10 05:17:19.984660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:26.864 [2024-11-10 05:17:19.984670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:26.864 [2024-11-10 05:17:19.984678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.864 [2024-11-10 05:17:19.984687] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:26.864 [2024-11-10 05:17:19.984695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:26.864 [2024-11-10 05:17:19.984706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:26.864 [2024-11-10 05:17:19.984712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.864 [2024-11-10 05:17:19.984722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:26.864 [2024-11-10 05:17:19.984729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:26.864 [2024-11-10 05:17:19.984738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:26.865 [2024-11-10 05:17:19.984745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:26.865 [2024-11-10 05:17:19.984753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:26.865 [2024-11-10 05:17:19.984759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:26.865 [2024-11-10 05:17:19.984770] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:26.865 [2024-11-10 05:17:19.984781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:26.865 [2024-11-10 05:17:19.984791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:26.865 [2024-11-10 05:17:19.984798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:26.865 [2024-11-10 05:17:19.984807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:26.865 [2024-11-10 05:17:19.984814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:26.865 [2024-11-10 05:17:19.984824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:26.865 [2024-11-10 05:17:19.984831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:26.865 [2024-11-10 
05:17:19.984841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:26.865 [2024-11-10 05:17:19.984848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:26.865 [2024-11-10 05:17:19.984856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:26.865 [2024-11-10 05:17:19.984863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:26.865 [2024-11-10 05:17:19.984871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:26.865 [2024-11-10 05:17:19.984878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:26.865 [2024-11-10 05:17:19.984887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:26.865 [2024-11-10 05:17:19.984894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:26.865 [2024-11-10 05:17:19.984902] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:26.865 [2024-11-10 05:17:19.984910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:26.865 [2024-11-10 05:17:19.984919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:26.865 [2024-11-10 05:17:19.984927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:26.865 [2024-11-10 05:17:19.984935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:26.865 [2024-11-10 05:17:19.984944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:26.865 [2024-11-10 05:17:19.984953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.865 [2024-11-10 05:17:19.984961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:26.865 [2024-11-10 05:17:19.984971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:15:26.865 [2024-11-10 05:17:19.984978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.865 [2024-11-10 05:17:19.985332] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
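For reference, the FTL startup traced above was driven by this RPC sequence, collected from the commands traced earlier in this log (the PCIe address, split size, and lvol UUID are specific to this host):

    # attach the NVMe namespace that backs the write-buffer cache
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    # carve one 5171 MiB partition (nvc0n1p0) out of it for the NV cache
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
    # create the FTL bdev on the thin-provisioned lvol, capping the L2P table at 60 MiB of DRAM
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0df03a9f-14a2-4605-888b-8d65dec23cca -c nvc0n1p0 --l2p_dram_limit 60

The 5171 MiB cache matches 5% of the 103424 MiB base device, and the 60 MiB limit is why the L2P init below reports a maximum resident size of 59 (of 60) MiB.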
00:15:26.865 [2024-11-10 05:17:19.985385] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:29.397 [2024-11-10 05:17:22.387011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.397 [2024-11-10 05:17:22.387203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:29.397 [2024-11-10 05:17:22.387289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2401.666 ms 00:15:29.397 [2024-11-10 05:17:22.387334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.397 [2024-11-10 05:17:22.403279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.397 [2024-11-10 05:17:22.403496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:29.397 [2024-11-10 05:17:22.403601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.799 ms 00:15:29.397 [2024-11-10 05:17:22.403689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.397 [2024-11-10 05:17:22.403893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.397 [2024-11-10 05:17:22.403960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:29.397 [2024-11-10 05:17:22.404073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:15:29.397 [2024-11-10 05:17:22.404176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.397 [2024-11-10 05:17:22.414527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.397 [2024-11-10 05:17:22.414691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:29.397 [2024-11-10 05:17:22.414780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.238 ms 00:15:29.397 [2024-11-10 05:17:22.414817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.397 [2024-11-10 05:17:22.415313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.397 [2024-11-10 05:17:22.415339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:29.397 [2024-11-10 05:17:22.415353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:29.397 [2024-11-10 05:17:22.415361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.397 [2024-11-10 05:17:22.415695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.397 [2024-11-10 05:17:22.415721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:29.397 [2024-11-10 05:17:22.415752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:15:29.397 [2024-11-10 05:17:22.415760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.397 [2024-11-10 05:17:22.415890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.397 [2024-11-10 05:17:22.415899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:29.397 [2024-11-10 05:17:22.415909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:15:29.397 [2024-11-10 05:17:22.415916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.397 [2024-11-10 05:17:22.421057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.397 [2024-11-10 05:17:22.421185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:29.397 [2024-11-10 
05:17:22.421203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.114 ms 00:15:29.397 [2024-11-10 05:17:22.421211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.397 [2024-11-10 05:17:22.429319] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:29.397 [2024-11-10 05:17:22.443042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.397 [2024-11-10 05:17:22.443161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:29.398 [2024-11-10 05:17:22.443176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.768 ms 00:15:29.398 [2024-11-10 05:17:22.443185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.398 [2024-11-10 05:17:22.480450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.398 [2024-11-10 05:17:22.480489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:29.398 [2024-11-10 05:17:22.480503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.233 ms 00:15:29.398 [2024-11-10 05:17:22.480514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.398 [2024-11-10 05:17:22.480693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.398 [2024-11-10 05:17:22.480705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:29.398 [2024-11-10 05:17:22.480716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:15:29.398 [2024-11-10 05:17:22.480735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.398 [2024-11-10 05:17:22.483412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.398 [2024-11-10 05:17:22.483445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:29.398 [2024-11-10 05:17:22.483454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.653 ms 00:15:29.398 [2024-11-10 05:17:22.483475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.398 [2024-11-10 05:17:22.485682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.398 [2024-11-10 05:17:22.485714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:29.398 [2024-11-10 05:17:22.485725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.172 ms 00:15:29.398 [2024-11-10 05:17:22.485735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.398 [2024-11-10 05:17:22.486047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.398 [2024-11-10 05:17:22.486075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:29.398 [2024-11-10 05:17:22.486085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:15:29.398 [2024-11-10 05:17:22.486097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.398 [2024-11-10 05:17:22.507698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.398 [2024-11-10 05:17:22.507858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:29.398 [2024-11-10 05:17:22.507898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.571 ms 00:15:29.398 [2024-11-10 05:17:22.507924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.398 [2024-11-10 05:17:22.515167] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.398 [2024-11-10 05:17:22.515254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:29.398 [2024-11-10 05:17:22.515280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.020 ms 00:15:29.398 [2024-11-10 05:17:22.515305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.398 [2024-11-10 05:17:22.521060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.398 [2024-11-10 05:17:22.521095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:29.398 [2024-11-10 05:17:22.521105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.670 ms 00:15:29.398 [2024-11-10 05:17:22.521115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.398 [2024-11-10 05:17:22.524278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.398 [2024-11-10 05:17:22.524316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:29.398 [2024-11-10 05:17:22.524325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.124 ms 00:15:29.398 [2024-11-10 05:17:22.524338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.398 [2024-11-10 05:17:22.524379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.398 [2024-11-10 05:17:22.524390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:29.398 [2024-11-10 05:17:22.524398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:29.398 [2024-11-10 05:17:22.524407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.398 [2024-11-10 05:17:22.524474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.398 [2024-11-10 05:17:22.524485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:29.398 [2024-11-10 05:17:22.524492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:29.398 [2024-11-10 05:17:22.524504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.398 [2024-11-10 05:17:22.525731] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2549.972 ms, result 0 00:15:29.398 { 00:15:29.398 "name": "ftl0", 00:15:29.398 "uuid": "299db392-82e3-459b-b2d4-a430c3530dc3" 00:15:29.398 } 00:15:29.398 05:17:22 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:29.398 05:17:22 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:29.398 05:17:22 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:29.398 05:17:22 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:29.398 05:17:22 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:29.398 05:17:22 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:29.398 05:17:22 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:29.657 05:17:22 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:29.916 [ 00:15:29.916 { 00:15:29.916 "name": "ftl0", 00:15:29.916 "aliases": [ 00:15:29.916 "299db392-82e3-459b-b2d4-a430c3530dc3" 00:15:29.916 ], 00:15:29.916 "product_name": "FTL disk", 00:15:29.916 
"block_size": 4096, 00:15:29.916 "num_blocks": 20971520, 00:15:29.916 "uuid": "299db392-82e3-459b-b2d4-a430c3530dc3", 00:15:29.916 "assigned_rate_limits": { 00:15:29.916 "rw_ios_per_sec": 0, 00:15:29.916 "rw_mbytes_per_sec": 0, 00:15:29.916 "r_mbytes_per_sec": 0, 00:15:29.916 "w_mbytes_per_sec": 0 00:15:29.916 }, 00:15:29.916 "claimed": false, 00:15:29.916 "zoned": false, 00:15:29.916 "supported_io_types": { 00:15:29.916 "read": true, 00:15:29.916 "write": true, 00:15:29.916 "unmap": true, 00:15:29.916 "flush": true, 00:15:29.916 "reset": false, 00:15:29.916 "nvme_admin": false, 00:15:29.916 "nvme_io": false, 00:15:29.916 "nvme_io_md": false, 00:15:29.916 "write_zeroes": true, 00:15:29.916 "zcopy": false, 00:15:29.916 "get_zone_info": false, 00:15:29.916 "zone_management": false, 00:15:29.916 "zone_append": false, 00:15:29.916 "compare": false, 00:15:29.916 "compare_and_write": false, 00:15:29.916 "abort": false, 00:15:29.916 "seek_hole": false, 00:15:29.916 "seek_data": false, 00:15:29.916 "copy": false, 00:15:29.916 "nvme_iov_md": false 00:15:29.916 }, 00:15:29.916 "driver_specific": { 00:15:29.916 "ftl": { 00:15:29.916 "base_bdev": "0df03a9f-14a2-4605-888b-8d65dec23cca", 00:15:29.916 "cache": "nvc0n1p0" 00:15:29.916 } 00:15:29.916 } 00:15:29.916 } 00:15:29.916 ] 00:15:29.916 05:17:22 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:29.916 05:17:22 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:29.916 05:17:22 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:29.916 05:17:23 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:29.916 05:17:23 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:30.176 [2024-11-10 05:17:23.324223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.176 [2024-11-10 05:17:23.324415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:30.176 [2024-11-10 05:17:23.324445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:30.177 [2024-11-10 05:17:23.324455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.177 [2024-11-10 05:17:23.324488] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:30.177 [2024-11-10 05:17:23.324908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.177 [2024-11-10 05:17:23.324938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:30.177 [2024-11-10 05:17:23.324948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:15:30.177 [2024-11-10 05:17:23.324958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.177 [2024-11-10 05:17:23.325366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.177 [2024-11-10 05:17:23.325391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:30.177 [2024-11-10 05:17:23.325400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:15:30.177 [2024-11-10 05:17:23.325409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.177 [2024-11-10 05:17:23.328654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.177 [2024-11-10 05:17:23.328689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:30.177 [2024-11-10 
05:17:23.328699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.222 ms 00:15:30.177 [2024-11-10 05:17:23.328708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.177 [2024-11-10 05:17:23.334899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.177 [2024-11-10 05:17:23.334955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:30.177 [2024-11-10 05:17:23.334964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.169 ms 00:15:30.177 [2024-11-10 05:17:23.334973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.177 [2024-11-10 05:17:23.336399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.177 [2024-11-10 05:17:23.336438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:30.177 [2024-11-10 05:17:23.336448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.344 ms 00:15:30.177 [2024-11-10 05:17:23.336456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.177 [2024-11-10 05:17:23.339757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.177 [2024-11-10 05:17:23.339793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:30.177 [2024-11-10 05:17:23.339803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.266 ms 00:15:30.177 [2024-11-10 05:17:23.339814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.177 [2024-11-10 05:17:23.339960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.177 [2024-11-10 05:17:23.339971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:30.177 [2024-11-10 05:17:23.339979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:15:30.177 [2024-11-10 05:17:23.340004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.177 [2024-11-10 05:17:23.341486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.177 [2024-11-10 05:17:23.341520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:30.177 [2024-11-10 05:17:23.341529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.460 ms 00:15:30.177 [2024-11-10 05:17:23.341537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.177 [2024-11-10 05:17:23.342589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.177 [2024-11-10 05:17:23.342717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:30.177 [2024-11-10 05:17:23.342731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:15:30.177 [2024-11-10 05:17:23.342740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.177 [2024-11-10 05:17:23.343529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.177 [2024-11-10 05:17:23.343558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:30.177 [2024-11-10 05:17:23.343567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:15:30.177 [2024-11-10 05:17:23.343576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.177 [2024-11-10 05:17:23.344525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.177 [2024-11-10 05:17:23.344560] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:30.177 [2024-11-10 05:17:23.344568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.876 ms 00:15:30.177 [2024-11-10 05:17:23.344577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.177 [2024-11-10 05:17:23.344609] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:30.177 [2024-11-10 05:17:23.344624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 
05:17:23.344810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.344988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.345007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.345015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.345023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:30.177 [2024-11-10 05:17:23.345031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.345039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:30.177 [2024-11-10 05:17:23.345047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:30.178 [2024-11-10 05:17:23.345493] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:30.178 [2024-11-10 05:17:23.345501] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 299db392-82e3-459b-b2d4-a430c3530dc3 00:15:30.178 [2024-11-10 05:17:23.345511] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:30.178 [2024-11-10 05:17:23.345518] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:30.178 [2024-11-10 05:17:23.345529] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:30.178 [2024-11-10 05:17:23.345537] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:30.178 [2024-11-10 05:17:23.345545] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:30.178 [2024-11-10 05:17:23.345552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:30.178 [2024-11-10 05:17:23.345561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:30.178 [2024-11-10 05:17:23.345567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:30.178 [2024-11-10 05:17:23.345575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:30.178 [2024-11-10 05:17:23.345582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.178 [2024-11-10 05:17:23.345591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:30.178 [2024-11-10 05:17:23.345599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:15:30.178 [2024-11-10 05:17:23.345607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.178 [2024-11-10 05:17:23.347360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.178 [2024-11-10 05:17:23.347470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:30.178 [2024-11-10 05:17:23.347535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms 00:15:30.178 [2024-11-10 05:17:23.347574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.178 [2024-11-10 05:17:23.347835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.178 [2024-11-10 05:17:23.347923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:30.178 [2024-11-10 05:17:23.347984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:15:30.178 [2024-11-10 05:17:23.348028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.178 [2024-11-10 05:17:23.352976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.178 [2024-11-10 05:17:23.353114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:30.178 [2024-11-10 05:17:23.353185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.178 [2024-11-10 05:17:23.353215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.178 
[2024-11-10 05:17:23.353314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.178 [2024-11-10 05:17:23.353398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:30.178 [2024-11-10 05:17:23.353465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.178 [2024-11-10 05:17:23.353494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.178 [2024-11-10 05:17:23.353602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.178 [2024-11-10 05:17:23.353684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:30.178 [2024-11-10 05:17:23.353742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.178 [2024-11-10 05:17:23.353769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.178 [2024-11-10 05:17:23.353843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.178 [2024-11-10 05:17:23.353910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:30.178 [2024-11-10 05:17:23.353970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.178 [2024-11-10 05:17:23.354054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.178 [2024-11-10 05:17:23.362962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.178 [2024-11-10 05:17:23.363214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:30.178 [2024-11-10 05:17:23.363286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.178 [2024-11-10 05:17:23.363328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.178 [2024-11-10 05:17:23.370555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.178 [2024-11-10 05:17:23.370595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:30.178 [2024-11-10 05:17:23.370606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.178 [2024-11-10 05:17:23.370615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.178 [2024-11-10 05:17:23.370686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.178 [2024-11-10 05:17:23.370700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:30.178 [2024-11-10 05:17:23.370720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.178 [2024-11-10 05:17:23.370730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.179 [2024-11-10 05:17:23.370778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.179 [2024-11-10 05:17:23.370789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:30.179 [2024-11-10 05:17:23.370797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.179 [2024-11-10 05:17:23.370805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.179 [2024-11-10 05:17:23.370879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.179 [2024-11-10 05:17:23.370890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:30.179 [2024-11-10 05:17:23.370898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.179 [2024-11-10 05:17:23.370908] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.179 [2024-11-10 05:17:23.370947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.179 [2024-11-10 05:17:23.370958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:30.179 [2024-11-10 05:17:23.370965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.179 [2024-11-10 05:17:23.370982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.179 [2024-11-10 05:17:23.371240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.179 [2024-11-10 05:17:23.371319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:30.179 [2024-11-10 05:17:23.371380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.179 [2024-11-10 05:17:23.371414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.179 [2024-11-10 05:17:23.371558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:30.179 [2024-11-10 05:17:23.371642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:30.179 [2024-11-10 05:17:23.371706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:30.179 [2024-11-10 05:17:23.371789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.179 [2024-11-10 05:17:23.371970] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.722 ms, result 0 00:15:30.179 true 00:15:30.179 05:17:23 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 84233 00:15:30.179 05:17:23 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 84233 ']' 00:15:30.179 05:17:23 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 84233 00:15:30.179 05:17:23 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:30.179 05:17:23 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:30.179 05:17:23 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84233 00:15:30.437 killing process with pid 84233 00:15:30.437 05:17:23 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:30.437 05:17:23 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:30.437 05:17:23 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84233' 00:15:30.437 05:17:23 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 84233 00:15:30.437 05:17:23 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 84233 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:35.706 05:17:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:35.706 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:35.706 fio-3.35 00:15:35.706 Starting 1 thread 00:15:39.894 00:15:39.894 test: (groupid=0, jobs=1): err= 0: pid=84392: Sun Nov 10 05:17:32 2024 00:15:39.894 read: IOPS=1310, BW=87.0MiB/s (91.3MB/s)(255MiB/2924msec) 00:15:39.894 slat (nsec): min=2941, max=22040, avg=4266.69, stdev=1939.59 00:15:39.894 clat (usec): min=246, max=818, avg=342.91, stdev=58.32 00:15:39.894 lat (usec): min=250, max=822, avg=347.18, stdev=58.90 00:15:39.894 clat percentiles (usec): 00:15:39.894 | 1.00th=[ 289], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 318], 00:15:39.894 | 30.00th=[ 322], 40.00th=[ 322], 50.00th=[ 322], 60.00th=[ 326], 00:15:39.894 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 420], 95.00th=[ 461], 00:15:39.894 | 99.00th=[ 611], 99.50th=[ 668], 99.90th=[ 791], 99.95th=[ 791], 00:15:39.894 | 99.99th=[ 816] 00:15:39.894 write: IOPS=1320, BW=87.7MiB/s (91.9MB/s)(256MiB/2921msec); 0 zone resets 00:15:39.894 slat (nsec): min=13423, max=89385, avg=19114.93, stdev=4304.16 00:15:39.894 clat (usec): min=292, max=1078, avg=380.45, stdev=80.05 00:15:39.894 lat (usec): min=310, max=1096, avg=399.57, stdev=80.81 00:15:39.894 clat percentiles (usec): 00:15:39.894 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 343], 00:15:39.894 | 30.00th=[ 347], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 355], 00:15:39.894 | 70.00th=[ 363], 80.00th=[ 412], 90.00th=[ 433], 95.00th=[ 529], 00:15:39.894 | 99.00th=[ 775], 99.50th=[ 832], 99.90th=[ 930], 99.95th=[ 988], 00:15:39.894 | 99.99th=[ 1074] 00:15:39.894 bw ( KiB/s): min=86768, max=92888, per=99.69%, avg=89488.00, stdev=2531.57, samples=5 00:15:39.894 iops : min= 1276, max= 1366, avg=1316.00, stdev=37.23, samples=5 00:15:39.894 lat (usec) : 250=0.01%, 500=95.66%, 750=3.64%, 1000=0.68% 
00:15:39.894 lat (msec) : 2=0.01% 00:15:39.894 cpu : usr=99.25%, sys=0.10%, ctx=7, majf=0, minf=1181 00:15:39.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:39.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.894 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.894 00:15:39.894 Run status group 0 (all jobs): 00:15:39.894 READ: bw=87.0MiB/s (91.3MB/s), 87.0MiB/s-87.0MiB/s (91.3MB/s-91.3MB/s), io=255MiB (267MB), run=2924-2924msec 00:15:39.894 WRITE: bw=87.7MiB/s (91.9MB/s), 87.7MiB/s-87.7MiB/s (91.9MB/s-91.9MB/s), io=256MiB (269MB), run=2921-2921msec 00:15:39.894 ----------------------------------------------------- 00:15:39.894 Suppressions used: 00:15:39.894 count bytes template 00:15:39.894 1 5 /usr/src/fio/parse.c 00:15:39.894 1 8 libtcmalloc_minimal.so 00:15:39.894 1 904 libcrypto.so 00:15:39.894 ----------------------------------------------------- 00:15:39.894 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:39.894 05:17:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:39.894 05:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:39.895 05:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:39.895 05:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:39.895 05:17:33 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:39.895 05:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:40.153 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:40.153 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:40.153 fio-3.35 00:15:40.153 Starting 2 threads 00:16:06.698 00:16:06.698 first_half: (groupid=0, jobs=1): err= 0: pid=84467: Sun Nov 10 05:17:55 2024 00:16:06.698 read: IOPS=3045, BW=11.9MiB/s (12.5MB/s)(255MiB/21420msec) 00:16:06.698 slat (nsec): min=3023, max=23545, avg=3729.73, stdev=642.02 00:16:06.698 clat (usec): min=606, max=268156, avg=33479.11, stdev=16118.66 00:16:06.698 lat (usec): min=610, max=268159, avg=33482.84, stdev=16118.68 00:16:06.698 clat percentiles (msec): 00:16:06.698 | 1.00th=[ 7], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:16:06.698 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:16:06.698 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 39], 95.00th=[ 48], 00:16:06.698 | 99.00th=[ 124], 99.50th=[ 140], 99.90th=[ 163], 99.95th=[ 203], 00:16:06.698 | 99.99th=[ 262] 00:16:06.698 write: IOPS=3985, BW=15.6MiB/s (16.3MB/s)(256MiB/16444msec); 0 zone resets 00:16:06.698 slat (usec): min=3, max=185, avg= 5.27, stdev= 2.55 00:16:06.698 clat (usec): min=361, max=74304, avg=8479.16, stdev=14308.87 00:16:06.698 lat (usec): min=368, max=74309, avg=8484.43, stdev=14308.90 00:16:06.698 clat percentiles (usec): 00:16:06.698 | 1.00th=[ 635], 5.00th=[ 717], 10.00th=[ 799], 20.00th=[ 1029], 00:16:06.698 | 30.00th=[ 1598], 40.00th=[ 3294], 50.00th=[ 4424], 60.00th=[ 5145], 00:16:06.698 | 70.00th=[ 5997], 80.00th=[ 9372], 90.00th=[13304], 95.00th=[55313], 00:16:06.698 | 99.00th=[64750], 99.50th=[68682], 99.90th=[72877], 99.95th=[73925], 00:16:06.698 | 99.99th=[73925] 00:16:06.698 bw ( KiB/s): min= 864, max=42968, per=89.17%, avg=27592.58, stdev=16123.38, samples=19 00:16:06.698 iops : min= 216, max=10742, avg=6898.11, stdev=4030.90, samples=19 00:16:06.698 lat (usec) : 500=0.03%, 750=3.59%, 1000=5.88% 00:16:06.698 lat (msec) : 2=6.78%, 4=6.91%, 10=19.15%, 20=4.12%, 50=48.09% 00:16:06.698 lat (msec) : 100=4.57%, 250=0.86%, 500=0.01% 00:16:06.698 cpu : usr=99.30%, sys=0.10%, ctx=49, majf=0, minf=5617 00:16:06.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:06.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.698 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.698 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.698 second_half: (groupid=0, jobs=1): err= 0: pid=84468: Sun Nov 10 05:17:55 2024 00:16:06.698 read: IOPS=3034, BW=11.9MiB/s (12.4MB/s)(255MiB/21517msec) 00:16:06.698 slat (nsec): min=2963, max=32552, avg=3705.26, stdev=615.06 00:16:06.698 clat (usec): min=641, max=276153, avg=32988.91, stdev=16786.22 00:16:06.698 lat (usec): min=644, max=276165, avg=32992.62, stdev=16786.25 00:16:06.698 clat percentiles (msec): 00:16:06.698 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 29], 00:16:06.698 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:16:06.698 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 37], 
95.00th=[ 43], 00:16:06.698 | 99.00th=[ 130], 99.50th=[ 146], 99.90th=[ 178], 99.95th=[ 199], 00:16:06.698 | 99.99th=[ 271] 00:16:06.698 write: IOPS=3868, BW=15.1MiB/s (15.8MB/s)(256MiB/16943msec); 0 zone resets 00:16:06.698 slat (usec): min=3, max=832, avg= 5.30, stdev= 3.95 00:16:06.698 clat (usec): min=369, max=74976, avg=9143.93, stdev=14946.69 00:16:06.698 lat (usec): min=376, max=74980, avg=9149.22, stdev=14946.75 00:16:06.698 clat percentiles (usec): 00:16:06.698 | 1.00th=[ 644], 5.00th=[ 766], 10.00th=[ 938], 20.00th=[ 1237], 00:16:06.698 | 30.00th=[ 2507], 40.00th=[ 3523], 50.00th=[ 4490], 60.00th=[ 5080], 00:16:06.698 | 70.00th=[ 5866], 80.00th=[ 9372], 90.00th=[26870], 95.00th=[56886], 00:16:06.698 | 99.00th=[64750], 99.50th=[69731], 99.90th=[72877], 99.95th=[73925], 00:16:06.698 | 99.99th=[73925] 00:16:06.698 bw ( KiB/s): min= 1736, max=54720, per=89.17%, avg=27594.11, stdev=16848.72, samples=19 00:16:06.698 iops : min= 434, max=13680, avg=6898.53, stdev=4212.18, samples=19 00:16:06.698 lat (usec) : 500=0.04%, 750=2.13%, 1000=3.89% 00:16:06.698 lat (msec) : 2=7.85%, 4=8.53%, 10=19.88%, 20=3.81%, 50=48.73% 00:16:06.698 lat (msec) : 100=4.16%, 250=0.97%, 500=0.01% 00:16:06.698 cpu : usr=99.48%, sys=0.08%, ctx=31, majf=0, minf=5527 00:16:06.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:06.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.698 issued rwts: total=65286,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.698 00:16:06.698 Run status group 0 (all jobs): 00:16:06.698 READ: bw=23.7MiB/s (24.8MB/s), 11.9MiB/s-11.9MiB/s (12.4MB/s-12.5MB/s), io=510MiB (535MB), run=21420-21517msec 00:16:06.698 WRITE: bw=30.2MiB/s (31.7MB/s), 15.1MiB/s-15.6MiB/s (15.8MB/s-16.3MB/s), io=512MiB (537MB), run=16444-16943msec 00:16:06.698 ----------------------------------------------------- 00:16:06.698 Suppressions used: 00:16:06.698 count bytes template 00:16:06.698 2 10 /usr/src/fio/parse.c 00:16:06.698 2 192 /usr/src/fio/iolog.c 00:16:06.698 1 8 libtcmalloc_minimal.so 00:16:06.698 1 904 libcrypto.so 00:16:06.698 ----------------------------------------------------- 00:16:06.698 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
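(The fio_bdev helper being traced here — the same sequence ran before each of the earlier fio jobs — wraps fio with the SPDK bdev ioengine: it inspects the plugin with ldd, pulls the ASan runtime path out with grep/awk, and preloads the sanitizer ahead of the plugin so interposition works under an ASan build. A condensed sketch of that pattern, with paths copied from this log; treat it as a paraphrase of the trace, not the harness's exact code:)

  # Condensed form of the fio_plugin pattern traced above and below.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  # Locate the ASan runtime the plugin links against (empty if not an ASan build).
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # Preload the sanitizer first, then the bdev ioengine, and run the job file.
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio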
00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:06.698 05:17:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:06.698 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:06.698 fio-3.35 00:16:06.698 Starting 1 thread 00:16:18.919 00:16:18.919 test: (groupid=0, jobs=1): err= 0: pid=84747: Sun Nov 10 05:18:10 2024 00:16:18.919 read: IOPS=8284, BW=32.4MiB/s (33.9MB/s)(255MiB/7870msec) 00:16:18.919 slat (nsec): min=2990, max=18534, avg=3480.62, stdev=744.76 00:16:18.919 clat (usec): min=592, max=30660, avg=15441.72, stdev=1587.96 00:16:18.919 lat (usec): min=596, max=30664, avg=15445.20, stdev=1587.98 00:16:18.919 clat percentiles (usec): 00:16:18.919 | 1.00th=[14222], 5.00th=[14484], 10.00th=[14615], 20.00th=[14746], 00:16:18.919 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:16:18.919 | 70.00th=[15401], 80.00th=[15533], 90.00th=[15795], 95.00th=[19006], 00:16:18.919 | 99.00th=[22938], 99.50th=[24249], 99.90th=[25560], 99.95th=[26870], 00:16:18.919 | 99.99th=[30016] 00:16:18.919 write: IOPS=12.8k, BW=49.8MiB/s (52.2MB/s)(256MiB/5138msec); 0 zone resets 00:16:18.919 slat (usec): min=4, max=225, avg= 6.16, stdev= 2.49 00:16:18.919 clat (usec): min=423, max=56010, avg=9974.65, stdev=11558.84 00:16:18.919 lat (usec): min=428, max=56015, avg=9980.81, stdev=11558.84 00:16:18.919 clat percentiles (usec): 00:16:18.919 | 1.00th=[ 562], 5.00th=[ 734], 10.00th=[ 824], 20.00th=[ 996], 00:16:18.919 | 30.00th=[ 1467], 40.00th=[ 2474], 50.00th=[ 6390], 60.00th=[ 9765], 00:16:18.919 | 70.00th=[11731], 80.00th=[14091], 90.00th=[25560], 95.00th=[38536], 00:16:18.919 | 99.00th=[47973], 99.50th=[49546], 99.90th=[53216], 99.95th=[53740], 00:16:18.919 | 99.99th=[54789] 00:16:18.919 bw ( KiB/s): min=18048, max=77512, per=93.42%, avg=47662.55, stdev=15339.55, samples=11 00:16:18.919 iops : min= 4512, max=19378, avg=11915.64, stdev=3834.89, samples=11 00:16:18.919 lat (usec) : 500=0.10%, 750=2.85%, 1000=7.11% 00:16:18.919 lat (msec) : 2=8.85%, 4=2.89%, 10=8.86%, 20=59.76%, 50=9.36% 00:16:18.919 lat (msec) : 100=0.23% 00:16:18.919 cpu : usr=99.13%, sys=0.19%, 
ctx=27, majf=0, minf=5577 00:16:18.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:18.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.919 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:18.919 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:18.919 00:16:18.919 Run status group 0 (all jobs): 00:16:18.919 READ: bw=32.4MiB/s (33.9MB/s), 32.4MiB/s-32.4MiB/s (33.9MB/s-33.9MB/s), io=255MiB (267MB), run=7870-7870msec 00:16:18.919 WRITE: bw=49.8MiB/s (52.2MB/s), 49.8MiB/s-49.8MiB/s (52.2MB/s-52.2MB/s), io=256MiB (268MB), run=5138-5138msec 00:16:18.919 ----------------------------------------------------- 00:16:18.919 Suppressions used: 00:16:18.919 count bytes template 00:16:18.919 1 5 /usr/src/fio/parse.c 00:16:18.919 2 192 /usr/src/fio/iolog.c 00:16:18.919 1 8 libtcmalloc_minimal.so 00:16:18.919 1 904 libcrypto.so 00:16:18.919 ----------------------------------------------------- 00:16:18.919 00:16:18.919 05:18:11 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:18.919 05:18:11 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:18.919 05:18:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:18.919 05:18:11 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:18.919 Remove shared memory files 00:16:18.919 05:18:11 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:18.919 05:18:11 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:18.919 05:18:11 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:18.919 05:18:11 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:18.919 05:18:11 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69799 /dev/shm/spdk_tgt_trace.pid83180 00:16:18.919 05:18:11 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:18.919 05:18:11 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:18.919 ************************************ 00:16:18.919 END TEST ftl_fio_basic 00:16:18.919 ************************************ 00:16:18.919 00:16:18.919 real 0m54.755s 00:16:18.919 user 2m4.018s 00:16:18.919 sys 0m2.338s 00:16:18.919 05:18:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:18.919 05:18:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:18.919 05:18:11 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:18.919 05:18:11 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:18.919 05:18:11 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:18.919 05:18:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:18.919 ************************************ 00:16:18.919 START TEST ftl_bdevperf 00:16:18.919 ************************************ 00:16:18.919 05:18:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:18.919 * Looking for test storage... 
00:16:18.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:18.919 05:18:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:18.919 05:18:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:18.919 05:18:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:18.919 05:18:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:18.919 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:18.919 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:18.919 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:18.919 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.919 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:18.919 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:18.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.920 --rc genhtml_branch_coverage=1 00:16:18.920 --rc genhtml_function_coverage=1 00:16:18.920 --rc genhtml_legend=1 00:16:18.920 --rc geninfo_all_blocks=1 00:16:18.920 --rc geninfo_unexecuted_blocks=1 00:16:18.920 00:16:18.920 ' 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:18.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.920 --rc genhtml_branch_coverage=1 00:16:18.920 
--rc genhtml_function_coverage=1 00:16:18.920 --rc genhtml_legend=1 00:16:18.920 --rc geninfo_all_blocks=1 00:16:18.920 --rc geninfo_unexecuted_blocks=1 00:16:18.920 00:16:18.920 ' 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:18.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.920 --rc genhtml_branch_coverage=1 00:16:18.920 --rc genhtml_function_coverage=1 00:16:18.920 --rc genhtml_legend=1 00:16:18.920 --rc geninfo_all_blocks=1 00:16:18.920 --rc geninfo_unexecuted_blocks=1 00:16:18.920 00:16:18.920 ' 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:18.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.920 --rc genhtml_branch_coverage=1 00:16:18.920 --rc genhtml_function_coverage=1 00:16:18.920 --rc genhtml_legend=1 00:16:18.920 --rc geninfo_all_blocks=1 00:16:18.920 --rc geninfo_unexecuted_blocks=1 00:16:18.920 00:16:18.920 ' 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=84975 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 84975 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 84975 ']' 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:18.920 05:18:11 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:18.920 [2024-11-10 05:18:11.383941] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
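(bdevperf is launched with -z, so it starts idle and waits for RPC commands; waitforlisten then blocks until the target answers on /var/tmp/spdk.sock. A rough shell equivalent of that wait — a sketch only, not the actual waitforlisten implementation; spdk_get_version is a standard SPDK RPC used here as a liveness probe:)

  # Poll the SPDK RPC socket until the application responds (illustrative sketch).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
  done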
00:16:18.920 [2024-11-10 05:18:11.384200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84975 ] 00:16:18.920 [2024-11-10 05:18:11.518458] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.920 [2024-11-10 05:18:11.549800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.179 05:18:12 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:19.179 05:18:12 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:19.179 05:18:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:19.179 05:18:12 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:19.179 05:18:12 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:19.179 05:18:12 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:19.179 05:18:12 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:19.179 05:18:12 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:19.438 05:18:12 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:19.438 05:18:12 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:19.438 05:18:12 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:19.438 05:18:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:19.438 05:18:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:19.438 05:18:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:19.438 05:18:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:19.438 05:18:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:19.696 05:18:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:19.696 { 00:16:19.696 "name": "nvme0n1", 00:16:19.696 "aliases": [ 00:16:19.696 "4dbf4f64-57fc-4d21-abd0-27e1d3821b46" 00:16:19.696 ], 00:16:19.696 "product_name": "NVMe disk", 00:16:19.696 "block_size": 4096, 00:16:19.696 "num_blocks": 1310720, 00:16:19.696 "uuid": "4dbf4f64-57fc-4d21-abd0-27e1d3821b46", 00:16:19.696 "numa_id": -1, 00:16:19.696 "assigned_rate_limits": { 00:16:19.696 "rw_ios_per_sec": 0, 00:16:19.696 "rw_mbytes_per_sec": 0, 00:16:19.696 "r_mbytes_per_sec": 0, 00:16:19.696 "w_mbytes_per_sec": 0 00:16:19.696 }, 00:16:19.696 "claimed": true, 00:16:19.696 "claim_type": "read_many_write_one", 00:16:19.696 "zoned": false, 00:16:19.696 "supported_io_types": { 00:16:19.696 "read": true, 00:16:19.696 "write": true, 00:16:19.696 "unmap": true, 00:16:19.696 "flush": true, 00:16:19.696 "reset": true, 00:16:19.696 "nvme_admin": true, 00:16:19.696 "nvme_io": true, 00:16:19.696 "nvme_io_md": false, 00:16:19.696 "write_zeroes": true, 00:16:19.696 "zcopy": false, 00:16:19.696 "get_zone_info": false, 00:16:19.696 "zone_management": false, 00:16:19.696 "zone_append": false, 00:16:19.696 "compare": true, 00:16:19.696 "compare_and_write": false, 00:16:19.696 "abort": true, 00:16:19.696 "seek_hole": false, 00:16:19.696 "seek_data": false, 00:16:19.696 "copy": true, 00:16:19.696 "nvme_iov_md": false 00:16:19.696 }, 00:16:19.696 "driver_specific": { 00:16:19.697 
"nvme": [ 00:16:19.697 { 00:16:19.697 "pci_address": "0000:00:11.0", 00:16:19.697 "trid": { 00:16:19.697 "trtype": "PCIe", 00:16:19.697 "traddr": "0000:00:11.0" 00:16:19.697 }, 00:16:19.697 "ctrlr_data": { 00:16:19.697 "cntlid": 0, 00:16:19.697 "vendor_id": "0x1b36", 00:16:19.697 "model_number": "QEMU NVMe Ctrl", 00:16:19.697 "serial_number": "12341", 00:16:19.697 "firmware_revision": "8.0.0", 00:16:19.697 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:19.697 "oacs": { 00:16:19.697 "security": 0, 00:16:19.697 "format": 1, 00:16:19.697 "firmware": 0, 00:16:19.697 "ns_manage": 1 00:16:19.697 }, 00:16:19.697 "multi_ctrlr": false, 00:16:19.697 "ana_reporting": false 00:16:19.697 }, 00:16:19.697 "vs": { 00:16:19.697 "nvme_version": "1.4" 00:16:19.697 }, 00:16:19.697 "ns_data": { 00:16:19.697 "id": 1, 00:16:19.697 "can_share": false 00:16:19.697 } 00:16:19.697 } 00:16:19.697 ], 00:16:19.697 "mp_policy": "active_passive" 00:16:19.697 } 00:16:19.697 } 00:16:19.697 ]' 00:16:19.697 05:18:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:19.697 05:18:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:19.697 05:18:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:19.697 05:18:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:19.697 05:18:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:19.697 05:18:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:19.697 05:18:12 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:19.697 05:18:12 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:19.697 05:18:12 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:19.697 05:18:12 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:19.697 05:18:12 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:19.955 05:18:12 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=180524a7-d395-4c74-8a2e-24884e7367a3 00:16:19.955 05:18:12 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:19.955 05:18:12 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 180524a7-d395-4c74-8a2e-24884e7367a3 00:16:19.955 05:18:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:20.214 05:18:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=0c0caeb0-461b-4a19-a99f-cd34934825c8 00:16:20.214 05:18:13 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0c0caeb0-461b-4a19-a99f-cd34934825c8 00:16:20.472 05:18:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=84a81fa5-cbb2-4b1d-b10f-6c379b9699bc 00:16:20.472 05:18:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 84a81fa5-cbb2-4b1d-b10f-6c379b9699bc 00:16:20.472 05:18:13 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:20.472 05:18:13 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:20.472 05:18:13 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=84a81fa5-cbb2-4b1d-b10f-6c379b9699bc 00:16:20.472 05:18:13 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:20.472 05:18:13 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 84a81fa5-cbb2-4b1d-b10f-6c379b9699bc 00:16:20.472 05:18:13 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=84a81fa5-cbb2-4b1d-b10f-6c379b9699bc 00:16:20.472 05:18:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:20.472 05:18:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:20.472 05:18:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:20.472 05:18:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84a81fa5-cbb2-4b1d-b10f-6c379b9699bc 00:16:20.731 05:18:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:20.731 { 00:16:20.731 "name": "84a81fa5-cbb2-4b1d-b10f-6c379b9699bc", 00:16:20.731 "aliases": [ 00:16:20.731 "lvs/nvme0n1p0" 00:16:20.731 ], 00:16:20.731 "product_name": "Logical Volume", 00:16:20.731 "block_size": 4096, 00:16:20.731 "num_blocks": 26476544, 00:16:20.731 "uuid": "84a81fa5-cbb2-4b1d-b10f-6c379b9699bc", 00:16:20.731 "assigned_rate_limits": { 00:16:20.731 "rw_ios_per_sec": 0, 00:16:20.731 "rw_mbytes_per_sec": 0, 00:16:20.731 "r_mbytes_per_sec": 0, 00:16:20.731 "w_mbytes_per_sec": 0 00:16:20.731 }, 00:16:20.731 "claimed": false, 00:16:20.731 "zoned": false, 00:16:20.731 "supported_io_types": { 00:16:20.731 "read": true, 00:16:20.731 "write": true, 00:16:20.731 "unmap": true, 00:16:20.731 "flush": false, 00:16:20.731 "reset": true, 00:16:20.731 "nvme_admin": false, 00:16:20.731 "nvme_io": false, 00:16:20.731 "nvme_io_md": false, 00:16:20.731 "write_zeroes": true, 00:16:20.731 "zcopy": false, 00:16:20.731 "get_zone_info": false, 00:16:20.731 "zone_management": false, 00:16:20.731 "zone_append": false, 00:16:20.731 "compare": false, 00:16:20.731 "compare_and_write": false, 00:16:20.731 "abort": false, 00:16:20.731 "seek_hole": true, 00:16:20.731 "seek_data": true, 00:16:20.731 "copy": false, 00:16:20.731 "nvme_iov_md": false 00:16:20.731 }, 00:16:20.731 "driver_specific": { 00:16:20.731 "lvol": { 00:16:20.731 "lvol_store_uuid": "0c0caeb0-461b-4a19-a99f-cd34934825c8", 00:16:20.731 "base_bdev": "nvme0n1", 00:16:20.731 "thin_provision": true, 00:16:20.731 "num_allocated_clusters": 0, 00:16:20.731 "snapshot": false, 00:16:20.731 "clone": false, 00:16:20.731 "esnap_clone": false 00:16:20.731 } 00:16:20.731 } 00:16:20.731 } 00:16:20.731 ]' 00:16:20.731 05:18:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:20.731 05:18:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:20.731 05:18:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:20.731 05:18:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:20.731 05:18:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:20.731 05:18:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:20.731 05:18:13 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:20.731 05:18:13 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:20.731 05:18:13 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:20.991 05:18:14 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:20.991 05:18:14 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:20.991 05:18:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 84a81fa5-cbb2-4b1d-b10f-6c379b9699bc 00:16:20.991 05:18:14 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=84a81fa5-cbb2-4b1d-b10f-6c379b9699bc 00:16:20.991 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:20.991 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:20.991 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:20.991 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84a81fa5-cbb2-4b1d-b10f-6c379b9699bc 00:16:21.255 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:21.255 { 00:16:21.255 "name": "84a81fa5-cbb2-4b1d-b10f-6c379b9699bc", 00:16:21.255 "aliases": [ 00:16:21.255 "lvs/nvme0n1p0" 00:16:21.255 ], 00:16:21.255 "product_name": "Logical Volume", 00:16:21.255 "block_size": 4096, 00:16:21.255 "num_blocks": 26476544, 00:16:21.255 "uuid": "84a81fa5-cbb2-4b1d-b10f-6c379b9699bc", 00:16:21.255 "assigned_rate_limits": { 00:16:21.255 "rw_ios_per_sec": 0, 00:16:21.255 "rw_mbytes_per_sec": 0, 00:16:21.255 "r_mbytes_per_sec": 0, 00:16:21.255 "w_mbytes_per_sec": 0 00:16:21.255 }, 00:16:21.255 "claimed": false, 00:16:21.255 "zoned": false, 00:16:21.255 "supported_io_types": { 00:16:21.255 "read": true, 00:16:21.255 "write": true, 00:16:21.255 "unmap": true, 00:16:21.255 "flush": false, 00:16:21.255 "reset": true, 00:16:21.255 "nvme_admin": false, 00:16:21.255 "nvme_io": false, 00:16:21.255 "nvme_io_md": false, 00:16:21.255 "write_zeroes": true, 00:16:21.255 "zcopy": false, 00:16:21.255 "get_zone_info": false, 00:16:21.255 "zone_management": false, 00:16:21.255 "zone_append": false, 00:16:21.255 "compare": false, 00:16:21.255 "compare_and_write": false, 00:16:21.255 "abort": false, 00:16:21.255 "seek_hole": true, 00:16:21.255 "seek_data": true, 00:16:21.255 "copy": false, 00:16:21.255 "nvme_iov_md": false 00:16:21.255 }, 00:16:21.255 "driver_specific": { 00:16:21.255 "lvol": { 00:16:21.255 "lvol_store_uuid": "0c0caeb0-461b-4a19-a99f-cd34934825c8", 00:16:21.255 "base_bdev": "nvme0n1", 00:16:21.255 "thin_provision": true, 00:16:21.255 "num_allocated_clusters": 0, 00:16:21.255 "snapshot": false, 00:16:21.255 "clone": false, 00:16:21.255 "esnap_clone": false 00:16:21.255 } 00:16:21.255 } 00:16:21.255 } 00:16:21.255 ]' 00:16:21.255 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:21.255 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:21.255 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:21.255 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:21.255 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:21.255 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:21.255 05:18:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:21.255 05:18:14 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:21.514 05:18:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:21.514 05:18:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 84a81fa5-cbb2-4b1d-b10f-6c379b9699bc 00:16:21.514 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=84a81fa5-cbb2-4b1d-b10f-6c379b9699bc 00:16:21.514 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:21.514 05:18:14 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:16:21.514 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:21.514 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84a81fa5-cbb2-4b1d-b10f-6c379b9699bc 00:16:21.514 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:21.514 { 00:16:21.514 "name": "84a81fa5-cbb2-4b1d-b10f-6c379b9699bc", 00:16:21.514 "aliases": [ 00:16:21.514 "lvs/nvme0n1p0" 00:16:21.514 ], 00:16:21.514 "product_name": "Logical Volume", 00:16:21.514 "block_size": 4096, 00:16:21.514 "num_blocks": 26476544, 00:16:21.514 "uuid": "84a81fa5-cbb2-4b1d-b10f-6c379b9699bc", 00:16:21.514 "assigned_rate_limits": { 00:16:21.514 "rw_ios_per_sec": 0, 00:16:21.514 "rw_mbytes_per_sec": 0, 00:16:21.514 "r_mbytes_per_sec": 0, 00:16:21.514 "w_mbytes_per_sec": 0 00:16:21.514 }, 00:16:21.514 "claimed": false, 00:16:21.514 "zoned": false, 00:16:21.514 "supported_io_types": { 00:16:21.514 "read": true, 00:16:21.514 "write": true, 00:16:21.514 "unmap": true, 00:16:21.514 "flush": false, 00:16:21.514 "reset": true, 00:16:21.514 "nvme_admin": false, 00:16:21.514 "nvme_io": false, 00:16:21.514 "nvme_io_md": false, 00:16:21.514 "write_zeroes": true, 00:16:21.514 "zcopy": false, 00:16:21.514 "get_zone_info": false, 00:16:21.514 "zone_management": false, 00:16:21.514 "zone_append": false, 00:16:21.514 "compare": false, 00:16:21.514 "compare_and_write": false, 00:16:21.514 "abort": false, 00:16:21.514 "seek_hole": true, 00:16:21.514 "seek_data": true, 00:16:21.514 "copy": false, 00:16:21.514 "nvme_iov_md": false 00:16:21.514 }, 00:16:21.514 "driver_specific": { 00:16:21.514 "lvol": { 00:16:21.514 "lvol_store_uuid": "0c0caeb0-461b-4a19-a99f-cd34934825c8", 00:16:21.514 "base_bdev": "nvme0n1", 00:16:21.514 "thin_provision": true, 00:16:21.514 "num_allocated_clusters": 0, 00:16:21.514 "snapshot": false, 00:16:21.514 "clone": false, 00:16:21.514 "esnap_clone": false 00:16:21.514 } 00:16:21.514 } 00:16:21.514 } 00:16:21.514 ]' 00:16:21.514 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:21.514 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:21.774 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:21.774 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:21.774 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:21.774 05:18:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:21.774 05:18:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:21.774 05:18:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 84a81fa5-cbb2-4b1d-b10f-6c379b9699bc -c nvc0n1p0 --l2p_dram_limit 20 00:16:21.774 [2024-11-10 05:18:14.914024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.774 [2024-11-10 05:18:14.914068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:21.774 [2024-11-10 05:18:14.914080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:21.774 [2024-11-10 05:18:14.914088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.774 [2024-11-10 05:18:14.914136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.774 [2024-11-10 05:18:14.914144] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:21.774 [2024-11-10 05:18:14.914154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:21.774 [2024-11-10 05:18:14.914161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.774 [2024-11-10 05:18:14.914176] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:21.774 [2024-11-10 05:18:14.914379] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:21.775 [2024-11-10 05:18:14.914391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.775 [2024-11-10 05:18:14.914398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:21.775 [2024-11-10 05:18:14.914405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:16:21.775 [2024-11-10 05:18:14.914411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.775 [2024-11-10 05:18:14.914437] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1c286cba-f8a3-4a33-9dd4-cc54c9e750a4 00:16:21.775 [2024-11-10 05:18:14.915403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.775 [2024-11-10 05:18:14.915425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:21.775 [2024-11-10 05:18:14.915433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:21.775 [2024-11-10 05:18:14.915441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.775 [2024-11-10 05:18:14.920115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.775 [2024-11-10 05:18:14.920141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:21.775 [2024-11-10 05:18:14.920149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.605 ms 00:16:21.775 [2024-11-10 05:18:14.920158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.775 [2024-11-10 05:18:14.920213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.775 [2024-11-10 05:18:14.920221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:21.775 [2024-11-10 05:18:14.920227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:21.775 [2024-11-10 05:18:14.920234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.775 [2024-11-10 05:18:14.920267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.775 [2024-11-10 05:18:14.920277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:21.775 [2024-11-10 05:18:14.920283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:21.775 [2024-11-10 05:18:14.920290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.775 [2024-11-10 05:18:14.920311] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:21.775 [2024-11-10 05:18:14.921566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.775 [2024-11-10 05:18:14.921590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:21.775 [2024-11-10 05:18:14.921599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:16:21.775 [2024-11-10 05:18:14.921605] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.775 [2024-11-10 05:18:14.921634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.775 [2024-11-10 05:18:14.921640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:21.775 [2024-11-10 05:18:14.921649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:21.775 [2024-11-10 05:18:14.921654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.775 [2024-11-10 05:18:14.921666] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:21.775 [2024-11-10 05:18:14.921771] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:21.775 [2024-11-10 05:18:14.921782] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:21.775 [2024-11-10 05:18:14.921790] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:21.775 [2024-11-10 05:18:14.921799] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:21.775 [2024-11-10 05:18:14.921806] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:21.775 [2024-11-10 05:18:14.921814] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:21.775 [2024-11-10 05:18:14.921820] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:21.775 [2024-11-10 05:18:14.921828] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:21.775 [2024-11-10 05:18:14.921834] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:21.775 [2024-11-10 05:18:14.921843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.775 [2024-11-10 05:18:14.921849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:21.775 [2024-11-10 05:18:14.921857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:16:21.775 [2024-11-10 05:18:14.921862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.775 [2024-11-10 05:18:14.921927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.775 [2024-11-10 05:18:14.921933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:21.775 [2024-11-10 05:18:14.921940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:21.775 [2024-11-10 05:18:14.921945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.775 [2024-11-10 05:18:14.922023] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:21.775 [2024-11-10 05:18:14.922035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:21.775 [2024-11-10 05:18:14.922043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:21.775 [2024-11-10 05:18:14.922050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.775 [2024-11-10 05:18:14.922057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:21.775 [2024-11-10 05:18:14.922063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:21.775 [2024-11-10 05:18:14.922069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:21.775 
[2024-11-10 05:18:14.922074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:21.775 [2024-11-10 05:18:14.922081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:21.775 [2024-11-10 05:18:14.922086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:21.775 [2024-11-10 05:18:14.922093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:21.775 [2024-11-10 05:18:14.922099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:21.775 [2024-11-10 05:18:14.922112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:21.775 [2024-11-10 05:18:14.922117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:21.775 [2024-11-10 05:18:14.922124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:21.775 [2024-11-10 05:18:14.922130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.775 [2024-11-10 05:18:14.922136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:21.775 [2024-11-10 05:18:14.922142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:21.775 [2024-11-10 05:18:14.922148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.775 [2024-11-10 05:18:14.922153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:21.775 [2024-11-10 05:18:14.922159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:21.775 [2024-11-10 05:18:14.922164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:21.775 [2024-11-10 05:18:14.922170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:21.775 [2024-11-10 05:18:14.922175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:21.775 [2024-11-10 05:18:14.922181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:21.775 [2024-11-10 05:18:14.922186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:21.775 [2024-11-10 05:18:14.922192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:21.775 [2024-11-10 05:18:14.922197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:21.775 [2024-11-10 05:18:14.922204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:21.775 [2024-11-10 05:18:14.922209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:21.775 [2024-11-10 05:18:14.922216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:21.775 [2024-11-10 05:18:14.922221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:21.775 [2024-11-10 05:18:14.922228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:21.775 [2024-11-10 05:18:14.922234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:21.775 [2024-11-10 05:18:14.922241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:21.775 [2024-11-10 05:18:14.922247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:21.775 [2024-11-10 05:18:14.922254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:21.775 [2024-11-10 05:18:14.922260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:21.775 [2024-11-10 05:18:14.922268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:21.775 [2024-11-10 05:18:14.922273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.775 [2024-11-10 05:18:14.922280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:21.775 [2024-11-10 05:18:14.922287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:21.775 [2024-11-10 05:18:14.922294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.775 [2024-11-10 05:18:14.922299] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:21.775 [2024-11-10 05:18:14.922311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:21.775 [2024-11-10 05:18:14.922317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:21.775 [2024-11-10 05:18:14.922325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.775 [2024-11-10 05:18:14.922332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:21.775 [2024-11-10 05:18:14.922339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:21.775 [2024-11-10 05:18:14.922345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:21.775 [2024-11-10 05:18:14.922352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:21.775 [2024-11-10 05:18:14.922358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:21.775 [2024-11-10 05:18:14.922365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:21.775 [2024-11-10 05:18:14.922373] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:21.775 [2024-11-10 05:18:14.922382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:21.775 [2024-11-10 05:18:14.922392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:21.776 [2024-11-10 05:18:14.922399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:21.776 [2024-11-10 05:18:14.922405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:21.776 [2024-11-10 05:18:14.922414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:21.776 [2024-11-10 05:18:14.922420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:21.776 [2024-11-10 05:18:14.922429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:21.776 [2024-11-10 05:18:14.922435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:21.776 [2024-11-10 05:18:14.922442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:21.776 [2024-11-10 05:18:14.922449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:21.776 [2024-11-10 05:18:14.922456] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:21.776 [2024-11-10 05:18:14.922462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:21.776 [2024-11-10 05:18:14.922469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:21.776 [2024-11-10 05:18:14.922475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:21.776 [2024-11-10 05:18:14.922483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:21.776 [2024-11-10 05:18:14.922488] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:21.776 [2024-11-10 05:18:14.922496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:21.776 [2024-11-10 05:18:14.922503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:21.776 [2024-11-10 05:18:14.922510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:21.776 [2024-11-10 05:18:14.922517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:21.776 [2024-11-10 05:18:14.922524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:21.776 [2024-11-10 05:18:14.922531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.776 [2024-11-10 05:18:14.922541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:21.776 [2024-11-10 05:18:14.922547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:16:21.776 [2024-11-10 05:18:14.922554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.776 [2024-11-10 05:18:14.922581] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
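Before the scrub messages continue below, the layout dump above can be cross-checked by hand. The figures are mutually consistent under a 4 KiB FTL block size (an assumption here; the block size itself is not printed in this log). A minimal sketch:

  # Sanity-check the layout dump, assuming 4 KiB FTL blocks.
  blk=4096
  l2p_entries=20971520; l2p_addr_sz=4
  echo "l2p region:    $(( l2p_entries * l2p_addr_sz / 1024 / 1024 )) MiB"  # 80 MiB, as dumped
  echo "user capacity: $(( l2p_entries * blk / 1024**3 )) GiB"              # 80 GiB mapped by the L2P
  p2l_pages=2048
  echo "p2l region:    $(( p2l_pages * blk / 1024 / 1024 )) MiB"            # 8 MiB each for p2l0..p2l3

The "Scrubbing 5 chunks" message that follows likewise matches the "NV cache chunk count 5" reported during layout setup.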
00:16:21.776 [2024-11-10 05:18:14.922591] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:24.308 [2024-11-10 05:18:17.046902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.046965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:24.308 [2024-11-10 05:18:17.046979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2124.308 ms 00:16:24.308 [2024-11-10 05:18:17.047007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.065824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.066242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:24.308 [2024-11-10 05:18:17.066290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.735 ms 00:16:24.308 [2024-11-10 05:18:17.066333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.066569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.066600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:24.308 [2024-11-10 05:18:17.066625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:16:24.308 [2024-11-10 05:18:17.066653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.078461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.078500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:24.308 [2024-11-10 05:18:17.078510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.685 ms 00:16:24.308 [2024-11-10 05:18:17.078523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.078547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.078557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:24.308 [2024-11-10 05:18:17.078568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:24.308 [2024-11-10 05:18:17.078576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.078898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.078915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:24.308 [2024-11-10 05:18:17.078924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:16:24.308 [2024-11-10 05:18:17.078934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.079057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.079068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:24.308 [2024-11-10 05:18:17.079076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:16:24.308 [2024-11-10 05:18:17.079085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.083306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.083343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:24.308 [2024-11-10 
05:18:17.083355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.205 ms 00:16:24.308 [2024-11-10 05:18:17.083364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.091483] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:24.308 [2024-11-10 05:18:17.096325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.096357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:24.308 [2024-11-10 05:18:17.096369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.910 ms 00:16:24.308 [2024-11-10 05:18:17.096377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.147704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.147754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:24.308 [2024-11-10 05:18:17.147769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.302 ms 00:16:24.308 [2024-11-10 05:18:17.147776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.147956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.147969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:24.308 [2024-11-10 05:18:17.147978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:16:24.308 [2024-11-10 05:18:17.147986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.151028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.151058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:24.308 [2024-11-10 05:18:17.151070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.005 ms 00:16:24.308 [2024-11-10 05:18:17.151077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.153505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.153640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:24.308 [2024-11-10 05:18:17.153659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.394 ms 00:16:24.308 [2024-11-10 05:18:17.153668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.153956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.153971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:24.308 [2024-11-10 05:18:17.153986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:16:24.308 [2024-11-10 05:18:17.154012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.178951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.178988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:24.308 [2024-11-10 05:18:17.179016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.917 ms 00:16:24.308 [2024-11-10 05:18:17.179024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.182792] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.182825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:24.308 [2024-11-10 05:18:17.182839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.724 ms 00:16:24.308 [2024-11-10 05:18:17.182847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.185768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.185799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:24.308 [2024-11-10 05:18:17.185812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.888 ms 00:16:24.308 [2024-11-10 05:18:17.185819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.188984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.189027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:24.308 [2024-11-10 05:18:17.189042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.131 ms 00:16:24.308 [2024-11-10 05:18:17.189050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.308 [2024-11-10 05:18:17.189086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.308 [2024-11-10 05:18:17.189095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:24.309 [2024-11-10 05:18:17.189108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:24.309 [2024-11-10 05:18:17.189119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.309 [2024-11-10 05:18:17.189186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.309 [2024-11-10 05:18:17.189195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:24.309 [2024-11-10 05:18:17.189205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:24.309 [2024-11-10 05:18:17.189214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.309 [2024-11-10 05:18:17.190109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2275.685 ms, result 0 00:16:24.309 { 00:16:24.309 "name": "ftl0", 00:16:24.309 "uuid": "1c286cba-f8a3-4a33-9dd4-cc54c9e750a4" 00:16:24.309 } 00:16:24.309 05:18:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:24.309 05:18:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:24.309 05:18:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:24.309 05:18:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:24.309 [2024-11-10 05:18:17.494890] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:24.309 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:24.309 Zero copy mechanism will not be used. 00:16:24.309 Running I/O for 4 seconds... 
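The run just launched issues 69632-byte writes at queue depth 1, hence the notice above: 69632 exceeds bdevperf's 65536-byte zero-copy threshold, so buffers are copied. In the per-second samples that follow, the MiB/s column is simply IOPS scaled by the I/O size; checking the first sample by hand:

  # MiB/s = IOPS * io_size / 2^20; the first sample below is 3143.00 IOPS at 69632 B.
  echo 'scale=2; 3143.00 * 69632 / 1048576' | bc   # -> 208.71, matching the log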
00:16:26.614 3143.00 IOPS, 208.71 MiB/s [2024-11-10T05:18:20.782Z] 3219.50 IOPS, 213.79 MiB/s [2024-11-10T05:18:21.716Z] 3258.67 IOPS, 216.40 MiB/s [2024-11-10T05:18:21.716Z] 3260.00 IOPS, 216.48 MiB/s 00:16:28.480 Latency(us) 00:16:28.480 [2024-11-10T05:18:21.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.480 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:28.480 ftl0 : 4.00 3259.26 216.44 0.00 0.00 322.25 152.02 2054.30 00:16:28.480 [2024-11-10T05:18:21.716Z] =================================================================================================================== 00:16:28.480 [2024-11-10T05:18:21.717Z] Total : 3259.26 216.44 0.00 0.00 322.25 152.02 2054.30 00:16:28.481 { 00:16:28.481 "results": [ 00:16:28.481 { 00:16:28.481 "job": "ftl0", 00:16:28.481 "core_mask": "0x1", 00:16:28.481 "workload": "randwrite", 00:16:28.481 "status": "finished", 00:16:28.481 "queue_depth": 1, 00:16:28.481 "io_size": 69632, 00:16:28.481 "runtime": 4.001518, 00:16:28.481 "iops": 3259.263109649888, 00:16:28.481 "mibps": 216.43544087518788, 00:16:28.481 "io_failed": 0, 00:16:28.481 "io_timeout": 0, 00:16:28.481 "avg_latency_us": 322.25173581211, 00:16:28.481 "min_latency_us": 152.02461538461537, 00:16:28.481 "max_latency_us": 2054.3015384615383 00:16:28.481 } 00:16:28.481 ], 00:16:28.481 "core_count": 1 00:16:28.481 } 00:16:28.481 [2024-11-10 05:18:21.501938] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:28.481 05:18:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:28.481 [2024-11-10 05:18:21.606429] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:28.481 Running I/O for 4 seconds... 
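With the queue-depth-128 run now starting, the two randwrite runs illustrate Little's law (outstanding I/Os ≈ IOPS × mean latency): the depth-1 results above reproduce a depth of about 1, and the depth-128 results below a depth of about 128, with mean latency rising from roughly 322 µs to 11.6 ms while IOPS grows only about 3.4x. A quick check:

  # Little's law: queue depth ~= IOPS * avg_latency_us / 1e6
  echo 'scale=2; 3259.26 * 322.25 / 1000000' | bc     # depth-1 run   -> ~1.05
  echo 'scale=2; 11028.29 * 11579.40 / 1000000' | bc  # depth-128 run -> ~127.70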
00:16:30.793 11531.00 IOPS, 45.04 MiB/s [2024-11-10T05:18:24.965Z] 11233.00 IOPS, 43.88 MiB/s [2024-11-10T05:18:25.902Z] 11047.67 IOPS, 43.15 MiB/s [2024-11-10T05:18:25.902Z] 11046.00 IOPS, 43.15 MiB/s 00:16:32.666 Latency(us) 00:16:32.666 [2024-11-10T05:18:25.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.666 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:32.666 ftl0 : 4.02 11028.29 43.08 0.00 0.00 11579.40 231.58 26617.70 00:16:32.666 [2024-11-10T05:18:25.902Z] =================================================================================================================== 00:16:32.666 [2024-11-10T05:18:25.902Z] Total : 11028.29 43.08 0.00 0.00 11579.40 0.00 26617.70 00:16:32.666 [2024-11-10 05:18:25.638125] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:32.666 { 00:16:32.666 "results": [ 00:16:32.666 { 00:16:32.666 "job": "ftl0", 00:16:32.666 "core_mask": "0x1", 00:16:32.666 "workload": "randwrite", 00:16:32.666 "status": "finished", 00:16:32.666 "queue_depth": 128, 00:16:32.666 "io_size": 4096, 00:16:32.666 "runtime": 4.017848, 00:16:32.666 "iops": 11028.291762157254, 00:16:32.666 "mibps": 43.079264695926774, 00:16:32.666 "io_failed": 0, 00:16:32.666 "io_timeout": 0, 00:16:32.666 "avg_latency_us": 11579.397777476868, 00:16:32.666 "min_latency_us": 231.58153846153846, 00:16:32.666 "max_latency_us": 26617.69846153846 00:16:32.666 } 00:16:32.666 ], 00:16:32.666 "core_count": 1 00:16:32.666 } 00:16:32.666 05:18:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:32.666 [2024-11-10 05:18:25.735637] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:32.666 Running I/O for 4 seconds... 
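The verify pass that follows writes and reads back the whole user-visible space: its LBA range length of 20971520 blocks (0x1400000 in the "Verification LBA range" line below) equals the L2P entry count reported at startup, i.e. the full mapped capacity. Assuming 4 KiB blocks:

  # The verify range spans the entire L2P-mapped space.
  printf '%d blocks\n' $(( 0x1400000 ))         # 20971520, matching the L2P entry count
  echo "$(( 0x1400000 * 4096 / 1024**3 )) GiB"  # 80 GiB of user capacity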
00:16:34.553 8957.00 IOPS, 34.99 MiB/s [2024-11-10T05:18:29.165Z] 9006.00 IOPS, 35.18 MiB/s [2024-11-10T05:18:30.102Z] 8900.00 IOPS, 34.77 MiB/s [2024-11-10T05:18:30.102Z] 8903.25 IOPS, 34.78 MiB/s 00:16:36.866 Latency(us) 00:16:36.866 [2024-11-10T05:18:30.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.866 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:36.866 Verification LBA range: start 0x0 length 0x1400000 00:16:36.866 ftl0 : 4.01 8915.27 34.83 0.00 0.00 14313.93 217.40 24399.56 00:16:36.866 [2024-11-10T05:18:30.102Z] =================================================================================================================== 00:16:36.866 [2024-11-10T05:18:30.102Z] Total : 8915.27 34.83 0.00 0.00 14313.93 0.00 24399.56 00:16:36.866 [2024-11-10 05:18:29.751435] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:36.866 { 00:16:36.866 "results": [ 00:16:36.866 { 00:16:36.866 "job": "ftl0", 00:16:36.866 "core_mask": "0x1", 00:16:36.866 "workload": "verify", 00:16:36.866 "status": "finished", 00:16:36.866 "verify_range": { 00:16:36.866 "start": 0, 00:16:36.866 "length": 20971520 00:16:36.866 }, 00:16:36.866 "queue_depth": 128, 00:16:36.866 "io_size": 4096, 00:16:36.866 "runtime": 4.008965, 00:16:36.866 "iops": 8915.268654128933, 00:16:36.866 "mibps": 34.825268180191145, 00:16:36.866 "io_failed": 0, 00:16:36.866 "io_timeout": 0, 00:16:36.866 "avg_latency_us": 14313.931590395, 00:16:36.866 "min_latency_us": 217.40307692307692, 00:16:36.866 "max_latency_us": 24399.55692307692 00:16:36.866 } 00:16:36.866 ], 00:16:36.866 "core_count": 1 00:16:36.866 } 00:16:36.866 05:18:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:36.866 [2024-11-10 05:18:29.951718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.866 [2024-11-10 05:18:29.951762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:36.866 [2024-11-10 05:18:29.951776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:36.866 [2024-11-10 05:18:29.951784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.866 [2024-11-10 05:18:29.951805] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:36.866 [2024-11-10 05:18:29.952254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.866 [2024-11-10 05:18:29.952276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:36.866 [2024-11-10 05:18:29.952288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:16:36.866 [2024-11-10 05:18:29.952300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.866 [2024-11-10 05:18:29.954018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.866 [2024-11-10 05:18:29.954058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:36.866 [2024-11-10 05:18:29.954069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.699 ms 00:16:36.866 [2024-11-10 05:18:29.954080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.866 [2024-11-10 05:18:30.092039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.866 [2024-11-10 05:18:30.092097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:16:36.866 [2024-11-10 05:18:30.092114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 137.941 ms 00:16:36.866 [2024-11-10 05:18:30.092124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.866 [2024-11-10 05:18:30.098405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.866 [2024-11-10 05:18:30.098437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:36.866 [2024-11-10 05:18:30.098448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.249 ms 00:16:36.866 [2024-11-10 05:18:30.098458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.126 [2024-11-10 05:18:30.099537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.126 [2024-11-10 05:18:30.099571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:37.126 [2024-11-10 05:18:30.099580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:16:37.126 [2024-11-10 05:18:30.099589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.126 [2024-11-10 05:18:30.103328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.126 [2024-11-10 05:18:30.103363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:37.126 [2024-11-10 05:18:30.103372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.712 ms 00:16:37.126 [2024-11-10 05:18:30.103386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.126 [2024-11-10 05:18:30.103491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.126 [2024-11-10 05:18:30.103502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:37.126 [2024-11-10 05:18:30.103510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:37.126 [2024-11-10 05:18:30.103518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.126 [2024-11-10 05:18:30.105018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.126 [2024-11-10 05:18:30.105174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:37.126 [2024-11-10 05:18:30.105188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.485 ms 00:16:37.126 [2024-11-10 05:18:30.105197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.126 [2024-11-10 05:18:30.106305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.126 [2024-11-10 05:18:30.106333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:37.127 [2024-11-10 05:18:30.106341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:16:37.127 [2024-11-10 05:18:30.106349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.127 [2024-11-10 05:18:30.107368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.127 [2024-11-10 05:18:30.107401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:37.127 [2024-11-10 05:18:30.107410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:16:37.127 [2024-11-10 05:18:30.107420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.127 [2024-11-10 05:18:30.108348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.127 [2024-11-10 05:18:30.108465] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:37.127 [2024-11-10 05:18:30.108478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:16:37.127 [2024-11-10 05:18:30.108486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.127 [2024-11-10 05:18:30.108512] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:37.127 [2024-11-10 05:18:30.108527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:37.127 [2024-11-10 05:18:30.108715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.108997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:37.127 [2024-11-10 05:18:30.109189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109349] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:37.128 [2024-11-10 05:18:30.109391] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:37.128 [2024-11-10 05:18:30.109399] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1c286cba-f8a3-4a33-9dd4-cc54c9e750a4 00:16:37.128 [2024-11-10 05:18:30.109408] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:37.128 [2024-11-10 05:18:30.109416] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:37.128 [2024-11-10 05:18:30.109424] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:37.128 [2024-11-10 05:18:30.109432] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:37.128 [2024-11-10 05:18:30.109441] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:37.128 [2024-11-10 05:18:30.109449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:37.128 [2024-11-10 05:18:30.109457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:37.128 [2024-11-10 05:18:30.109463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:37.128 [2024-11-10 05:18:30.109471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:37.128 [2024-11-10 05:18:30.109478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.128 [2024-11-10 05:18:30.109486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:37.128 [2024-11-10 05:18:30.109500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:16:37.128 [2024-11-10 05:18:30.109510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.128 [2024-11-10 05:18:30.110814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.128 [2024-11-10 05:18:30.110839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:37.128 [2024-11-10 05:18:30.110848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.289 ms 00:16:37.128 [2024-11-10 05:18:30.110857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.128 [2024-11-10 05:18:30.110925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.128 [2024-11-10 05:18:30.110935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:37.128 [2024-11-10 05:18:30.110943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:37.128 [2024-11-10 05:18:30.110953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.128 [2024-11-10 05:18:30.115345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.128 [2024-11-10 05:18:30.115446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:37.128 [2024-11-10 05:18:30.115498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.128 [2024-11-10 05:18:30.115523] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:37.128 [2024-11-10 05:18:30.115586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.128 [2024-11-10 05:18:30.115734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:37.128 [2024-11-10 05:18:30.115758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.128 [2024-11-10 05:18:30.115778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.128 [2024-11-10 05:18:30.115847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.128 [2024-11-10 05:18:30.116014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:37.128 [2024-11-10 05:18:30.116039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.128 [2024-11-10 05:18:30.116065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.128 [2024-11-10 05:18:30.116092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.128 [2024-11-10 05:18:30.116149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:37.128 [2024-11-10 05:18:30.116172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.128 [2024-11-10 05:18:30.116193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.128 [2024-11-10 05:18:30.124552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.128 [2024-11-10 05:18:30.124688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:37.128 [2024-11-10 05:18:30.124739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.128 [2024-11-10 05:18:30.124763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.128 [2024-11-10 05:18:30.132124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.128 [2024-11-10 05:18:30.132258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:37.128 [2024-11-10 05:18:30.132317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.128 [2024-11-10 05:18:30.132343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.128 [2024-11-10 05:18:30.132497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.128 [2024-11-10 05:18:30.132535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:37.128 [2024-11-10 05:18:30.132620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.128 [2024-11-10 05:18:30.132645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.128 [2024-11-10 05:18:30.132691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.128 [2024-11-10 05:18:30.132793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:37.128 [2024-11-10 05:18:30.132805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.128 [2024-11-10 05:18:30.132816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.128 [2024-11-10 05:18:30.132884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.128 [2024-11-10 05:18:30.132898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:37.128 [2024-11-10 05:18:30.132906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:37.128 [2024-11-10 05:18:30.132919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.128 [2024-11-10 05:18:30.132946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.128 [2024-11-10 05:18:30.132956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:37.128 [2024-11-10 05:18:30.132964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.128 [2024-11-10 05:18:30.132973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.128 [2024-11-10 05:18:30.133015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.128 [2024-11-10 05:18:30.133025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:37.128 [2024-11-10 05:18:30.133035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.128 [2024-11-10 05:18:30.133044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.128 [2024-11-10 05:18:30.133081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.128 [2024-11-10 05:18:30.133095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:37.128 [2024-11-10 05:18:30.133102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.128 [2024-11-10 05:18:30.133113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.129 [2024-11-10 05:18:30.133227] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 181.483 ms, result 0 00:16:37.129 true 00:16:37.129 05:18:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 84975 00:16:37.129 05:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 84975 ']' 00:16:37.129 05:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 84975 00:16:37.129 05:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:37.129 05:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:37.129 05:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84975 00:16:37.129 killing process with pid 84975 00:16:37.129 Received shutdown signal, test time was about 4.000000 seconds 00:16:37.129 00:16:37.129 Latency(us) 00:16:37.129 [2024-11-10T05:18:30.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.129 [2024-11-10T05:18:30.365Z] =================================================================================================================== 00:16:37.129 [2024-11-10T05:18:30.365Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:37.129 05:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:37.129 05:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:37.129 05:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84975' 00:16:37.129 05:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 84975 00:16:37.129 05:18:30 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 84975 00:16:42.395 Remove shared memory files 00:16:42.395 05:18:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:42.395 05:18:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:42.395 05:18:35 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:42.395 05:18:35 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:42.395 05:18:35 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:42.395 05:18:35 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:42.395 05:18:35 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:42.395 05:18:35 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:42.395 ************************************ 00:16:42.395 END TEST ftl_bdevperf 00:16:42.395 ************************************ 00:16:42.395 00:16:42.395 real 0m24.087s 00:16:42.395 user 0m26.645s 00:16:42.395 sys 0m0.816s 00:16:42.395 05:18:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:42.395 05:18:35 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:42.395 05:18:35 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:42.395 05:18:35 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:42.395 05:18:35 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:42.395 05:18:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:42.395 ************************************ 00:16:42.395 START TEST ftl_trim 00:16:42.395 ************************************ 00:16:42.395 05:18:35 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:42.395 * Looking for test storage... 00:16:42.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:42.395 05:18:35 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:42.395 05:18:35 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:16:42.395 05:18:35 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:42.395 05:18:35 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:42.395 05:18:35 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:42.395 05:18:35 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:42.395 05:18:35 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:42.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.395 --rc genhtml_branch_coverage=1 00:16:42.395 --rc genhtml_function_coverage=1 00:16:42.395 --rc genhtml_legend=1 00:16:42.395 --rc geninfo_all_blocks=1 00:16:42.395 --rc geninfo_unexecuted_blocks=1 00:16:42.395 00:16:42.395 ' 00:16:42.395 05:18:35 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:42.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.395 --rc genhtml_branch_coverage=1 00:16:42.395 --rc genhtml_function_coverage=1 00:16:42.395 --rc genhtml_legend=1 00:16:42.395 --rc geninfo_all_blocks=1 00:16:42.395 --rc geninfo_unexecuted_blocks=1 00:16:42.395 00:16:42.395 ' 00:16:42.395 05:18:35 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:42.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.395 --rc genhtml_branch_coverage=1 00:16:42.395 --rc genhtml_function_coverage=1 00:16:42.395 --rc genhtml_legend=1 00:16:42.395 --rc geninfo_all_blocks=1 00:16:42.395 --rc geninfo_unexecuted_blocks=1 00:16:42.395 00:16:42.395 ' 00:16:42.395 05:18:35 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:42.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.395 --rc genhtml_branch_coverage=1 00:16:42.395 --rc genhtml_function_coverage=1 00:16:42.395 --rc genhtml_legend=1 00:16:42.395 --rc geninfo_all_blocks=1 00:16:42.395 --rc geninfo_unexecuted_blocks=1 00:16:42.395 00:16:42.395 ' 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
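The xtrace above steps through the lcov version gate in scripts/common.sh: 'lt 1.15 2' calls cmp_versions, which splits each version string on the characters '.', '-' and ':' and compares the resulting components pairwise as integers, treating a missing component as 0. A minimal standalone sketch of that comparison (a simplification for illustration; version_lt is a hypothetical helper, not the actual scripts/common.sh implementation, and it handles only the '<' operator with numeric components):

# Sketch of the component-wise version comparison traced above.
# Hypothetical helper; assumes purely numeric version components.
version_lt() {
    local IFS=.-: i
    local -a v1 v2
    read -ra v1 <<< "$1"   # e.g. "1.15" -> (1 15)
    read -ra v2 <<< "$2"   # e.g. "2"    -> (2)
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1   # all components equal: not strictly less-than
}
version_lt 1.15 2 && echo lcov is older than 2   # same verdict as the gate traced above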
00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:42.395 05:18:35 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:42.396 05:18:35 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=85314 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:42.396 05:18:35 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 85314 00:16:42.396 05:18:35 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85314 ']' 00:16:42.396 05:18:35 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.396 05:18:35 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:42.396 05:18:35 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.396 05:18:35 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:42.396 05:18:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:42.396 [2024-11-10 05:18:35.524511] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:42.396 [2024-11-10 05:18:35.524794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85314 ] 00:16:42.654 [2024-11-10 05:18:35.681112] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:42.654 [2024-11-10 05:18:35.714208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.654 [2024-11-10 05:18:35.714453] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.654 [2024-11-10 05:18:35.714510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.219 05:18:36 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:43.219 05:18:36 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:43.219 05:18:36 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:43.219 05:18:36 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:43.219 05:18:36 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:43.219 05:18:36 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:43.219 05:18:36 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:43.219 05:18:36 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:43.477 05:18:36 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:43.477 05:18:36 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:43.477 05:18:36 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:43.477 05:18:36 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:43.477 05:18:36 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:43.477 05:18:36 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:43.477 05:18:36 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:43.477 05:18:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:43.736 05:18:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:43.736 { 00:16:43.736 "name": "nvme0n1", 00:16:43.736 "aliases": [ 
00:16:43.736 "45603106-baec-41f3-9fd5-ce74e743604e" 00:16:43.736 ], 00:16:43.736 "product_name": "NVMe disk", 00:16:43.736 "block_size": 4096, 00:16:43.736 "num_blocks": 1310720, 00:16:43.736 "uuid": "45603106-baec-41f3-9fd5-ce74e743604e", 00:16:43.736 "numa_id": -1, 00:16:43.736 "assigned_rate_limits": { 00:16:43.736 "rw_ios_per_sec": 0, 00:16:43.736 "rw_mbytes_per_sec": 0, 00:16:43.736 "r_mbytes_per_sec": 0, 00:16:43.736 "w_mbytes_per_sec": 0 00:16:43.736 }, 00:16:43.736 "claimed": true, 00:16:43.736 "claim_type": "read_many_write_one", 00:16:43.736 "zoned": false, 00:16:43.736 "supported_io_types": { 00:16:43.736 "read": true, 00:16:43.736 "write": true, 00:16:43.736 "unmap": true, 00:16:43.736 "flush": true, 00:16:43.736 "reset": true, 00:16:43.736 "nvme_admin": true, 00:16:43.736 "nvme_io": true, 00:16:43.736 "nvme_io_md": false, 00:16:43.736 "write_zeroes": true, 00:16:43.736 "zcopy": false, 00:16:43.736 "get_zone_info": false, 00:16:43.736 "zone_management": false, 00:16:43.736 "zone_append": false, 00:16:43.736 "compare": true, 00:16:43.736 "compare_and_write": false, 00:16:43.736 "abort": true, 00:16:43.736 "seek_hole": false, 00:16:43.736 "seek_data": false, 00:16:43.736 "copy": true, 00:16:43.736 "nvme_iov_md": false 00:16:43.736 }, 00:16:43.736 "driver_specific": { 00:16:43.736 "nvme": [ 00:16:43.736 { 00:16:43.736 "pci_address": "0000:00:11.0", 00:16:43.736 "trid": { 00:16:43.736 "trtype": "PCIe", 00:16:43.736 "traddr": "0000:00:11.0" 00:16:43.736 }, 00:16:43.736 "ctrlr_data": { 00:16:43.736 "cntlid": 0, 00:16:43.736 "vendor_id": "0x1b36", 00:16:43.736 "model_number": "QEMU NVMe Ctrl", 00:16:43.736 "serial_number": "12341", 00:16:43.736 "firmware_revision": "8.0.0", 00:16:43.736 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:43.736 "oacs": { 00:16:43.736 "security": 0, 00:16:43.736 "format": 1, 00:16:43.736 "firmware": 0, 00:16:43.736 "ns_manage": 1 00:16:43.736 }, 00:16:43.736 "multi_ctrlr": false, 00:16:43.736 "ana_reporting": false 00:16:43.736 }, 00:16:43.736 "vs": { 00:16:43.736 "nvme_version": "1.4" 00:16:43.736 }, 00:16:43.736 "ns_data": { 00:16:43.736 "id": 1, 00:16:43.736 "can_share": false 00:16:43.736 } 00:16:43.736 } 00:16:43.736 ], 00:16:43.736 "mp_policy": "active_passive" 00:16:43.736 } 00:16:43.736 } 00:16:43.736 ]' 00:16:43.736 05:18:36 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:43.736 05:18:36 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:43.736 05:18:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:43.736 05:18:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:43.736 05:18:36 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:43.736 05:18:36 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:43.736 05:18:36 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:43.736 05:18:36 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:43.736 05:18:36 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:43.736 05:18:36 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:43.736 05:18:36 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:43.995 05:18:37 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=0c0caeb0-461b-4a19-a99f-cd34934825c8 00:16:43.995 05:18:37 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:43.995 05:18:37 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 0c0caeb0-461b-4a19-a99f-cd34934825c8 00:16:44.254 05:18:37 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:44.514 05:18:37 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=3fbbe496-612e-42e3-a309-c94907fc4c77 00:16:44.514 05:18:37 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3fbbe496-612e-42e3-a309-c94907fc4c77 00:16:44.772 05:18:37 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=27437359-509a-4c2d-8eeb-a0d0995255a5 00:16:44.772 05:18:37 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 27437359-509a-4c2d-8eeb-a0d0995255a5 00:16:44.772 05:18:37 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:44.772 05:18:37 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:44.772 05:18:37 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=27437359-509a-4c2d-8eeb-a0d0995255a5 00:16:44.772 05:18:37 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:44.773 05:18:37 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 27437359-509a-4c2d-8eeb-a0d0995255a5 00:16:44.773 05:18:37 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=27437359-509a-4c2d-8eeb-a0d0995255a5 00:16:44.773 05:18:37 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:44.773 05:18:37 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:44.773 05:18:37 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:44.773 05:18:37 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27437359-509a-4c2d-8eeb-a0d0995255a5 00:16:44.773 05:18:37 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:44.773 { 00:16:44.773 "name": "27437359-509a-4c2d-8eeb-a0d0995255a5", 00:16:44.773 "aliases": [ 00:16:44.773 "lvs/nvme0n1p0" 00:16:44.773 ], 00:16:44.773 "product_name": "Logical Volume", 00:16:44.773 "block_size": 4096, 00:16:44.773 "num_blocks": 26476544, 00:16:44.773 "uuid": "27437359-509a-4c2d-8eeb-a0d0995255a5", 00:16:44.773 "assigned_rate_limits": { 00:16:44.773 "rw_ios_per_sec": 0, 00:16:44.773 "rw_mbytes_per_sec": 0, 00:16:44.773 "r_mbytes_per_sec": 0, 00:16:44.773 "w_mbytes_per_sec": 0 00:16:44.773 }, 00:16:44.773 "claimed": false, 00:16:44.773 "zoned": false, 00:16:44.773 "supported_io_types": { 00:16:44.773 "read": true, 00:16:44.773 "write": true, 00:16:44.773 "unmap": true, 00:16:44.773 "flush": false, 00:16:44.773 "reset": true, 00:16:44.773 "nvme_admin": false, 00:16:44.773 "nvme_io": false, 00:16:44.773 "nvme_io_md": false, 00:16:44.773 "write_zeroes": true, 00:16:44.773 "zcopy": false, 00:16:44.773 "get_zone_info": false, 00:16:44.773 "zone_management": false, 00:16:44.773 "zone_append": false, 00:16:44.773 "compare": false, 00:16:44.773 "compare_and_write": false, 00:16:44.773 "abort": false, 00:16:44.773 "seek_hole": true, 00:16:44.773 "seek_data": true, 00:16:44.773 "copy": false, 00:16:44.773 "nvme_iov_md": false 00:16:44.773 }, 00:16:44.773 "driver_specific": { 00:16:44.773 "lvol": { 00:16:44.773 "lvol_store_uuid": "3fbbe496-612e-42e3-a309-c94907fc4c77", 00:16:44.773 "base_bdev": "nvme0n1", 00:16:44.773 "thin_provision": true, 00:16:44.773 "num_allocated_clusters": 0, 00:16:44.773 "snapshot": false, 00:16:44.773 "clone": false, 00:16:44.773 "esnap_clone": false 00:16:44.773 } 00:16:44.773 } 00:16:44.773 } 00:16:44.773 ]' 00:16:44.773 05:18:37 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:44.773 05:18:37 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:44.773 05:18:37 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:45.031 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:45.031 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:45.031 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:45.031 05:18:38 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:45.031 05:18:38 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:45.031 05:18:38 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:45.290 05:18:38 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:45.290 05:18:38 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:45.290 05:18:38 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 27437359-509a-4c2d-8eeb-a0d0995255a5 00:16:45.290 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=27437359-509a-4c2d-8eeb-a0d0995255a5 00:16:45.290 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:45.290 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:45.290 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:45.290 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27437359-509a-4c2d-8eeb-a0d0995255a5 00:16:45.290 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:45.290 { 00:16:45.290 "name": "27437359-509a-4c2d-8eeb-a0d0995255a5", 00:16:45.290 "aliases": [ 00:16:45.290 "lvs/nvme0n1p0" 00:16:45.290 ], 00:16:45.290 "product_name": "Logical Volume", 00:16:45.290 "block_size": 4096, 00:16:45.290 "num_blocks": 26476544, 00:16:45.290 "uuid": "27437359-509a-4c2d-8eeb-a0d0995255a5", 00:16:45.290 "assigned_rate_limits": { 00:16:45.290 "rw_ios_per_sec": 0, 00:16:45.290 "rw_mbytes_per_sec": 0, 00:16:45.290 "r_mbytes_per_sec": 0, 00:16:45.290 "w_mbytes_per_sec": 0 00:16:45.290 }, 00:16:45.290 "claimed": false, 00:16:45.290 "zoned": false, 00:16:45.290 "supported_io_types": { 00:16:45.290 "read": true, 00:16:45.290 "write": true, 00:16:45.290 "unmap": true, 00:16:45.290 "flush": false, 00:16:45.290 "reset": true, 00:16:45.290 "nvme_admin": false, 00:16:45.290 "nvme_io": false, 00:16:45.290 "nvme_io_md": false, 00:16:45.290 "write_zeroes": true, 00:16:45.290 "zcopy": false, 00:16:45.290 "get_zone_info": false, 00:16:45.290 "zone_management": false, 00:16:45.290 "zone_append": false, 00:16:45.290 "compare": false, 00:16:45.290 "compare_and_write": false, 00:16:45.290 "abort": false, 00:16:45.290 "seek_hole": true, 00:16:45.290 "seek_data": true, 00:16:45.290 "copy": false, 00:16:45.290 "nvme_iov_md": false 00:16:45.290 }, 00:16:45.290 "driver_specific": { 00:16:45.290 "lvol": { 00:16:45.290 "lvol_store_uuid": "3fbbe496-612e-42e3-a309-c94907fc4c77", 00:16:45.290 "base_bdev": "nvme0n1", 00:16:45.290 "thin_provision": true, 00:16:45.290 "num_allocated_clusters": 0, 00:16:45.290 "snapshot": false, 00:16:45.290 "clone": false, 00:16:45.290 "esnap_clone": false 00:16:45.290 } 00:16:45.290 } 00:16:45.290 } 00:16:45.290 ]' 00:16:45.290 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:45.549 05:18:38 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:45.549 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:45.549 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:45.549 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:45.549 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:45.549 05:18:38 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:45.549 05:18:38 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:45.549 05:18:38 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:45.549 05:18:38 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:45.549 05:18:38 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 27437359-509a-4c2d-8eeb-a0d0995255a5 00:16:45.549 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=27437359-509a-4c2d-8eeb-a0d0995255a5 00:16:45.549 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:45.549 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:45.549 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:45.549 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27437359-509a-4c2d-8eeb-a0d0995255a5 00:16:45.807 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:45.807 { 00:16:45.808 "name": "27437359-509a-4c2d-8eeb-a0d0995255a5", 00:16:45.808 "aliases": [ 00:16:45.808 "lvs/nvme0n1p0" 00:16:45.808 ], 00:16:45.808 "product_name": "Logical Volume", 00:16:45.808 "block_size": 4096, 00:16:45.808 "num_blocks": 26476544, 00:16:45.808 "uuid": "27437359-509a-4c2d-8eeb-a0d0995255a5", 00:16:45.808 "assigned_rate_limits": { 00:16:45.808 "rw_ios_per_sec": 0, 00:16:45.808 "rw_mbytes_per_sec": 0, 00:16:45.808 "r_mbytes_per_sec": 0, 00:16:45.808 "w_mbytes_per_sec": 0 00:16:45.808 }, 00:16:45.808 "claimed": false, 00:16:45.808 "zoned": false, 00:16:45.808 "supported_io_types": { 00:16:45.808 "read": true, 00:16:45.808 "write": true, 00:16:45.808 "unmap": true, 00:16:45.808 "flush": false, 00:16:45.808 "reset": true, 00:16:45.808 "nvme_admin": false, 00:16:45.808 "nvme_io": false, 00:16:45.808 "nvme_io_md": false, 00:16:45.808 "write_zeroes": true, 00:16:45.808 "zcopy": false, 00:16:45.808 "get_zone_info": false, 00:16:45.808 "zone_management": false, 00:16:45.808 "zone_append": false, 00:16:45.808 "compare": false, 00:16:45.808 "compare_and_write": false, 00:16:45.808 "abort": false, 00:16:45.808 "seek_hole": true, 00:16:45.808 "seek_data": true, 00:16:45.808 "copy": false, 00:16:45.808 "nvme_iov_md": false 00:16:45.808 }, 00:16:45.808 "driver_specific": { 00:16:45.808 "lvol": { 00:16:45.808 "lvol_store_uuid": "3fbbe496-612e-42e3-a309-c94907fc4c77", 00:16:45.808 "base_bdev": "nvme0n1", 00:16:45.808 "thin_provision": true, 00:16:45.808 "num_allocated_clusters": 0, 00:16:45.808 "snapshot": false, 00:16:45.808 "clone": false, 00:16:45.808 "esnap_clone": false 00:16:45.808 } 00:16:45.808 } 00:16:45.808 } 00:16:45.808 ]' 00:16:45.808 05:18:38 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:45.808 05:18:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:45.808 05:18:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:45.808 05:18:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:16:45.808 05:18:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:45.808 05:18:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:45.808 05:18:39 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:45.808 05:18:39 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 27437359-509a-4c2d-8eeb-a0d0995255a5 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:46.068 [2024-11-10 05:18:39.215883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.068 [2024-11-10 05:18:39.215930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:46.068 [2024-11-10 05:18:39.215944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:46.068 [2024-11-10 05:18:39.215955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.068 [2024-11-10 05:18:39.218387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.068 [2024-11-10 05:18:39.218535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:46.068 [2024-11-10 05:18:39.218552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.403 ms 00:16:46.068 [2024-11-10 05:18:39.218564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.068 [2024-11-10 05:18:39.218656] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:46.068 [2024-11-10 05:18:39.218914] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:46.068 [2024-11-10 05:18:39.218933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.068 [2024-11-10 05:18:39.218943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:46.068 [2024-11-10 05:18:39.218952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:16:46.068 [2024-11-10 05:18:39.218961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.068 [2024-11-10 05:18:39.219087] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cc1e5a32-ec08-4380-949d-289ffcdd7e88 00:16:46.068 [2024-11-10 05:18:39.220130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.068 [2024-11-10 05:18:39.220162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:46.068 [2024-11-10 05:18:39.220174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:46.068 [2024-11-10 05:18:39.220182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.068 [2024-11-10 05:18:39.225116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.068 [2024-11-10 05:18:39.225252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:46.068 [2024-11-10 05:18:39.225270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.858 ms 00:16:46.068 [2024-11-10 05:18:39.225278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.068 [2024-11-10 05:18:39.225404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.068 [2024-11-10 05:18:39.225416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:46.068 [2024-11-10 05:18:39.225427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.056 ms 00:16:46.068 [2024-11-10 05:18:39.225446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.068 [2024-11-10 05:18:39.225478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.068 [2024-11-10 05:18:39.225489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:46.068 [2024-11-10 05:18:39.225499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:46.068 [2024-11-10 05:18:39.225506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.068 [2024-11-10 05:18:39.225549] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:46.068 [2024-11-10 05:18:39.226919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.068 [2024-11-10 05:18:39.226953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:46.068 [2024-11-10 05:18:39.226963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.376 ms 00:16:46.068 [2024-11-10 05:18:39.226972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.068 [2024-11-10 05:18:39.227030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.068 [2024-11-10 05:18:39.227052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:46.068 [2024-11-10 05:18:39.227061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:46.068 [2024-11-10 05:18:39.227072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.068 [2024-11-10 05:18:39.227098] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:46.068 [2024-11-10 05:18:39.227231] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:46.068 [2024-11-10 05:18:39.227244] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:46.068 [2024-11-10 05:18:39.227257] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:46.068 [2024-11-10 05:18:39.227267] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:46.068 [2024-11-10 05:18:39.227277] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:46.068 [2024-11-10 05:18:39.227285] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:46.068 [2024-11-10 05:18:39.227305] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:46.068 [2024-11-10 05:18:39.227313] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:46.068 [2024-11-10 05:18:39.227321] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:46.068 [2024-11-10 05:18:39.227329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.068 [2024-11-10 05:18:39.227345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:46.068 [2024-11-10 05:18:39.227352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:16:46.068 [2024-11-10 05:18:39.227363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.068 [2024-11-10 05:18:39.227454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.068 
[2024-11-10 05:18:39.227466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:46.068 [2024-11-10 05:18:39.227473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:46.068 [2024-11-10 05:18:39.227482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.068 [2024-11-10 05:18:39.227593] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:46.068 [2024-11-10 05:18:39.227610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:46.068 [2024-11-10 05:18:39.227619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:46.068 [2024-11-10 05:18:39.227629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:46.068 [2024-11-10 05:18:39.227640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:46.068 [2024-11-10 05:18:39.227649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:46.068 [2024-11-10 05:18:39.227657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:46.068 [2024-11-10 05:18:39.227667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:46.068 [2024-11-10 05:18:39.227675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:46.068 [2024-11-10 05:18:39.227684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:46.068 [2024-11-10 05:18:39.227692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:46.068 [2024-11-10 05:18:39.227704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:46.069 [2024-11-10 05:18:39.227711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:46.069 [2024-11-10 05:18:39.227722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:46.069 [2024-11-10 05:18:39.227730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:46.069 [2024-11-10 05:18:39.227739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:46.069 [2024-11-10 05:18:39.227747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:46.069 [2024-11-10 05:18:39.227756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:46.069 [2024-11-10 05:18:39.227764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:46.069 [2024-11-10 05:18:39.227773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:46.069 [2024-11-10 05:18:39.227780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:46.069 [2024-11-10 05:18:39.227790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:46.069 [2024-11-10 05:18:39.227798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:46.069 [2024-11-10 05:18:39.227807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:46.069 [2024-11-10 05:18:39.227815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:46.069 [2024-11-10 05:18:39.227824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:46.069 [2024-11-10 05:18:39.227832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:46.069 [2024-11-10 05:18:39.227841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:46.069 [2024-11-10 05:18:39.227857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:46.069 [2024-11-10 05:18:39.227868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:46.069 [2024-11-10 05:18:39.227876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:46.069 [2024-11-10 05:18:39.227884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:46.069 [2024-11-10 05:18:39.227892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:46.069 [2024-11-10 05:18:39.227901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:46.069 [2024-11-10 05:18:39.227909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:46.069 [2024-11-10 05:18:39.227918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:46.069 [2024-11-10 05:18:39.227926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:46.069 [2024-11-10 05:18:39.227937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:46.069 [2024-11-10 05:18:39.227946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:46.069 [2024-11-10 05:18:39.227955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:46.069 [2024-11-10 05:18:39.227963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:46.069 [2024-11-10 05:18:39.227972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:46.069 [2024-11-10 05:18:39.227979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:46.069 [2024-11-10 05:18:39.227986] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:46.069 [2024-11-10 05:18:39.228004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:46.069 [2024-11-10 05:18:39.228015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:46.069 [2024-11-10 05:18:39.228023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:46.069 [2024-11-10 05:18:39.228041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:46.069 [2024-11-10 05:18:39.228048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:46.069 [2024-11-10 05:18:39.228056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:46.069 [2024-11-10 05:18:39.228062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:46.069 [2024-11-10 05:18:39.228071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:46.069 [2024-11-10 05:18:39.228078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:46.069 [2024-11-10 05:18:39.228090] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:46.069 [2024-11-10 05:18:39.228099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:46.069 [2024-11-10 05:18:39.228116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:46.069 [2024-11-10 05:18:39.228128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:46.069 [2024-11-10 05:18:39.228137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:46.069 [2024-11-10 05:18:39.228144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:46.069 [2024-11-10 05:18:39.228154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:46.069 [2024-11-10 05:18:39.228161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:46.069 [2024-11-10 05:18:39.228171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:46.069 [2024-11-10 05:18:39.228178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:46.069 [2024-11-10 05:18:39.228186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:46.069 [2024-11-10 05:18:39.228193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:46.069 [2024-11-10 05:18:39.228202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:46.069 [2024-11-10 05:18:39.228209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:46.069 [2024-11-10 05:18:39.228219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:46.069 [2024-11-10 05:18:39.228226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:46.069 [2024-11-10 05:18:39.228235] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:46.069 [2024-11-10 05:18:39.228243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:46.069 [2024-11-10 05:18:39.228253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:46.069 [2024-11-10 05:18:39.228260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:46.069 [2024-11-10 05:18:39.228269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:46.069 [2024-11-10 05:18:39.228276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:46.069 [2024-11-10 05:18:39.228285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.069 [2024-11-10 05:18:39.228292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:46.069 [2024-11-10 05:18:39.228304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:16:46.069 [2024-11-10 05:18:39.228311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.069 [2024-11-10 05:18:39.228375] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:46.069 [2024-11-10 05:18:39.228385] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:48.641 [2024-11-10 05:18:41.347298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.641 [2024-11-10 05:18:41.347504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:48.641 [2024-11-10 05:18:41.347529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2118.906 ms 00:16:48.641 [2024-11-10 05:18:41.347538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.641 [2024-11-10 05:18:41.364864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.641 [2024-11-10 05:18:41.364933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:48.641 [2024-11-10 05:18:41.364960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.235 ms 00:16:48.641 [2024-11-10 05:18:41.364975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.641 [2024-11-10 05:18:41.365254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.641 [2024-11-10 05:18:41.365286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:48.641 [2024-11-10 05:18:41.365307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:16:48.641 [2024-11-10 05:18:41.365321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.641 [2024-11-10 05:18:41.376597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.641 [2024-11-10 05:18:41.376633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:48.641 [2024-11-10 05:18:41.376646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.227 ms 00:16:48.641 [2024-11-10 05:18:41.376654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.641 [2024-11-10 05:18:41.376721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.641 [2024-11-10 05:18:41.376731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:48.641 [2024-11-10 05:18:41.376741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:48.641 [2024-11-10 05:18:41.376749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.641 [2024-11-10 05:18:41.377093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.641 [2024-11-10 05:18:41.377109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:48.641 [2024-11-10 05:18:41.377120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:16:48.641 [2024-11-10 05:18:41.377127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.641 [2024-11-10 05:18:41.377261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.641 [2024-11-10 05:18:41.377272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:48.641 [2024-11-10 05:18:41.377283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:16:48.641 [2024-11-10 05:18:41.377292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.641 [2024-11-10 05:18:41.382534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.641 [2024-11-10 05:18:41.382706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:48.641 [2024-11-10 05:18:41.382724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.211 ms 00:16:48.641 [2024-11-10 05:18:41.382732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.641 [2024-11-10 05:18:41.390892] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:48.641 [2024-11-10 05:18:41.404622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.641 [2024-11-10 05:18:41.404657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:48.641 [2024-11-10 05:18:41.404668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.806 ms 00:16:48.641 [2024-11-10 05:18:41.404688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.641 [2024-11-10 05:18:41.460510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.641 [2024-11-10 05:18:41.460563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:48.642 [2024-11-10 05:18:41.460577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.740 ms 00:16:48.642 [2024-11-10 05:18:41.460590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.642 [2024-11-10 05:18:41.460763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.642 [2024-11-10 05:18:41.460776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:48.642 [2024-11-10 05:18:41.460787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:16:48.642 [2024-11-10 05:18:41.460797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.642 [2024-11-10 05:18:41.463592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.642 [2024-11-10 05:18:41.463627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:48.642 [2024-11-10 05:18:41.463638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.770 ms 00:16:48.642 [2024-11-10 05:18:41.463647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.642 [2024-11-10 05:18:41.465901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.642 [2024-11-10 05:18:41.466070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:48.642 [2024-11-10 05:18:41.466087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.225 ms 00:16:48.642 [2024-11-10 05:18:41.466098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.642 [2024-11-10 05:18:41.466399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.642 [2024-11-10 05:18:41.466418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:48.642 [2024-11-10 05:18:41.466429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:16:48.642 [2024-11-10 05:18:41.466439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.642 [2024-11-10 05:18:41.492310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.642 [2024-11-10 05:18:41.492347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:48.642 [2024-11-10 05:18:41.492358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.843 ms 00:16:48.642 [2024-11-10 05:18:41.492378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
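By this point in the trace, trim.sh has assembled the whole device stack that the FTL startup steps above are initializing. As a condensed, hand-run replay of the rpc.py calls already recorded in the xtrace (not a script that exists in the repo; the <lvs-uuid> and <lvol-uuid> placeholders stand for the UUIDs that each run prints), the stack looks like this:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Base (0000:00:11.0) and cache (0000:00:10.0) NVMe controllers, as attached above
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
# Thin-provisioned 103424 MiB volume to serve as the FTL base device
$rpc bdev_lvol_create_lvstore nvme0n1 lvs
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>   # <lvs-uuid> is printed by the previous call
# 5171 MiB write-buffer cache split off nvc0n1, then the FTL bdev on top
$rpc bdev_split_create nvc0n1 -s 5171 1
$rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The matching teardown, rpc.py bdev_ftl_unload -b ftl0, appears further down in the trace.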
00:16:48.642 [2024-11-10 05:18:41.496133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.642 [2024-11-10 05:18:41.496171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:48.642 [2024-11-10 05:18:41.496181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.689 ms 00:16:48.642 [2024-11-10 05:18:41.496194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.642 [2024-11-10 05:18:41.499038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.642 [2024-11-10 05:18:41.499068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:48.642 [2024-11-10 05:18:41.499077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.797 ms 00:16:48.642 [2024-11-10 05:18:41.499086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.642 [2024-11-10 05:18:41.502305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.642 [2024-11-10 05:18:41.502339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:48.642 [2024-11-10 05:18:41.502349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.166 ms 00:16:48.642 [2024-11-10 05:18:41.502360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.642 [2024-11-10 05:18:41.502408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.642 [2024-11-10 05:18:41.502419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:48.642 [2024-11-10 05:18:41.502428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:48.642 [2024-11-10 05:18:41.502440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.642 [2024-11-10 05:18:41.502509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.642 [2024-11-10 05:18:41.502520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:48.642 [2024-11-10 05:18:41.502527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:48.642 [2024-11-10 05:18:41.502536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.642 [2024-11-10 05:18:41.503387] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:48.642 [2024-11-10 05:18:41.504361] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2287.279 ms, result 0 00:16:48.642 [2024-11-10 05:18:41.505084] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:48.642 { 00:16:48.642 "name": "ftl0", 00:16:48.642 "uuid": "cc1e5a32-ec08-4380-949d-289ffcdd7e88" 00:16:48.642 } 00:16:48.642 05:18:41 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:48.642 05:18:41 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:48.642 05:18:41 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:48.642 05:18:41 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:48.642 05:18:41 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:48.642 05:18:41 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:48.642 05:18:41 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:48.642 05:18:41 ftl.ftl_trim --
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:48.900 [ 00:16:48.900 { 00:16:48.900 "name": "ftl0", 00:16:48.900 "aliases": [ 00:16:48.900 "cc1e5a32-ec08-4380-949d-289ffcdd7e88" 00:16:48.900 ], 00:16:48.900 "product_name": "FTL disk", 00:16:48.900 "block_size": 4096, 00:16:48.900 "num_blocks": 23592960, 00:16:48.900 "uuid": "cc1e5a32-ec08-4380-949d-289ffcdd7e88", 00:16:48.900 "assigned_rate_limits": { 00:16:48.900 "rw_ios_per_sec": 0, 00:16:48.900 "rw_mbytes_per_sec": 0, 00:16:48.900 "r_mbytes_per_sec": 0, 00:16:48.900 "w_mbytes_per_sec": 0 00:16:48.900 }, 00:16:48.901 "claimed": false, 00:16:48.901 "zoned": false, 00:16:48.901 "supported_io_types": { 00:16:48.901 "read": true, 00:16:48.901 "write": true, 00:16:48.901 "unmap": true, 00:16:48.901 "flush": true, 00:16:48.901 "reset": false, 00:16:48.901 "nvme_admin": false, 00:16:48.901 "nvme_io": false, 00:16:48.901 "nvme_io_md": false, 00:16:48.901 "write_zeroes": true, 00:16:48.901 "zcopy": false, 00:16:48.901 "get_zone_info": false, 00:16:48.901 "zone_management": false, 00:16:48.901 "zone_append": false, 00:16:48.901 "compare": false, 00:16:48.901 "compare_and_write": false, 00:16:48.901 "abort": false, 00:16:48.901 "seek_hole": false, 00:16:48.901 "seek_data": false, 00:16:48.901 "copy": false, 00:16:48.901 "nvme_iov_md": false 00:16:48.901 }, 00:16:48.901 "driver_specific": { 00:16:48.901 "ftl": { 00:16:48.901 "base_bdev": "27437359-509a-4c2d-8eeb-a0d0995255a5", 00:16:48.901 "cache": "nvc0n1p0" 00:16:48.901 } 00:16:48.901 } 00:16:48.901 } 00:16:48.901 ] 00:16:48.901 05:18:41 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:48.901 05:18:41 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:48.901 05:18:41 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:48.901 05:18:42 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:48.901 05:18:42 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:49.160 05:18:42 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:49.160 { 00:16:49.160 "name": "ftl0", 00:16:49.160 "aliases": [ 00:16:49.160 "cc1e5a32-ec08-4380-949d-289ffcdd7e88" 00:16:49.160 ], 00:16:49.160 "product_name": "FTL disk", 00:16:49.160 "block_size": 4096, 00:16:49.160 "num_blocks": 23592960, 00:16:49.160 "uuid": "cc1e5a32-ec08-4380-949d-289ffcdd7e88", 00:16:49.160 "assigned_rate_limits": { 00:16:49.160 "rw_ios_per_sec": 0, 00:16:49.160 "rw_mbytes_per_sec": 0, 00:16:49.160 "r_mbytes_per_sec": 0, 00:16:49.160 "w_mbytes_per_sec": 0 00:16:49.160 }, 00:16:49.160 "claimed": false, 00:16:49.160 "zoned": false, 00:16:49.160 "supported_io_types": { 00:16:49.160 "read": true, 00:16:49.160 "write": true, 00:16:49.160 "unmap": true, 00:16:49.160 "flush": true, 00:16:49.160 "reset": false, 00:16:49.160 "nvme_admin": false, 00:16:49.160 "nvme_io": false, 00:16:49.160 "nvme_io_md": false, 00:16:49.160 "write_zeroes": true, 00:16:49.160 "zcopy": false, 00:16:49.160 "get_zone_info": false, 00:16:49.160 "zone_management": false, 00:16:49.160 "zone_append": false, 00:16:49.160 "compare": false, 00:16:49.160 "compare_and_write": false, 00:16:49.160 "abort": false, 00:16:49.160 "seek_hole": false, 00:16:49.160 "seek_data": false, 00:16:49.160 "copy": false, 00:16:49.160 "nvme_iov_md": false 00:16:49.160 }, 00:16:49.160 "driver_specific": { 00:16:49.160 "ftl": { 00:16:49.160 "base_bdev": "27437359-509a-4c2d-8eeb-a0d0995255a5", 
00:16:49.160 "cache": "nvc0n1p0" 00:16:49.160 } 00:16:49.160 } 00:16:49.160 } 00:16:49.160 ]' 00:16:49.160 05:18:42 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:49.160 05:18:42 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:49.160 05:18:42 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:49.420 [2024-11-10 05:18:42.547174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.420 [2024-11-10 05:18:42.547222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:49.420 [2024-11-10 05:18:42.547236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:49.420 [2024-11-10 05:18:42.547245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.420 [2024-11-10 05:18:42.547283] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:49.420 [2024-11-10 05:18:42.547681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.420 [2024-11-10 05:18:42.547700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:49.420 [2024-11-10 05:18:42.547709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:16:49.420 [2024-11-10 05:18:42.547718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.420 [2024-11-10 05:18:42.548237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.420 [2024-11-10 05:18:42.548252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:49.420 [2024-11-10 05:18:42.548262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:16:49.420 [2024-11-10 05:18:42.548284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.420 [2024-11-10 05:18:42.551916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.420 [2024-11-10 05:18:42.552062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:49.420 [2024-11-10 05:18:42.552077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.608 ms 00:16:49.420 [2024-11-10 05:18:42.552087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.420 [2024-11-10 05:18:42.559032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.420 [2024-11-10 05:18:42.559147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:49.420 [2024-11-10 05:18:42.559161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.906 ms 00:16:49.420 [2024-11-10 05:18:42.559174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.420 [2024-11-10 05:18:42.560797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.420 [2024-11-10 05:18:42.560832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:49.420 [2024-11-10 05:18:42.560841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.541 ms 00:16:49.420 [2024-11-10 05:18:42.560851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.420 [2024-11-10 05:18:42.565070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.420 [2024-11-10 05:18:42.565106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:49.420 [2024-11-10 05:18:42.565116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.184 ms 00:16:49.420 [2024-11-10 05:18:42.565126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.420 [2024-11-10 05:18:42.565287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.420 [2024-11-10 05:18:42.565306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:49.420 [2024-11-10 05:18:42.565318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:16:49.420 [2024-11-10 05:18:42.565328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.420 [2024-11-10 05:18:42.566933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.420 [2024-11-10 05:18:42.567059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:49.420 [2024-11-10 05:18:42.567073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:16:49.420 [2024-11-10 05:18:42.567086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.420 [2024-11-10 05:18:42.568433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.420 [2024-11-10 05:18:42.568467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:49.420 [2024-11-10 05:18:42.568476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.308 ms 00:16:49.420 [2024-11-10 05:18:42.568486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.420 [2024-11-10 05:18:42.569462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.420 [2024-11-10 05:18:42.569494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:49.420 [2024-11-10 05:18:42.569502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:16:49.420 [2024-11-10 05:18:42.569511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.420 [2024-11-10 05:18:42.570665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.420 [2024-11-10 05:18:42.570771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:49.420 [2024-11-10 05:18:42.570783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:16:49.420 [2024-11-10 05:18:42.570793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.420 [2024-11-10 05:18:42.570831] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:49.420 [2024-11-10 05:18:42.570847 .. 05:18:42.571718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands report the identical line) 00:16:49.421 [2024-11-10 05:18:42.571736] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:49.421 [2024-11-10 05:18:42.571745] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cc1e5a32-ec08-4380-949d-289ffcdd7e88 00:16:49.421 [2024-11-10 05:18:42.571764] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:49.421 [2024-11-10 05:18:42.571771] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:49.421 [2024-11-10 05:18:42.571780] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:49.421 [2024-11-10 05:18:42.571787] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:49.421 [2024-11-10 05:18:42.571797] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:49.421 [2024-11-10 05:18:42.571807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:49.422
[2024-11-10 05:18:42.571816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:49.422 [2024-11-10 05:18:42.571822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:49.422 [2024-11-10 05:18:42.571830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:49.422 [2024-11-10 05:18:42.571837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.422 [2024-11-10 05:18:42.571845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:49.422 [2024-11-10 05:18:42.571861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:16:49.422 [2024-11-10 05:18:42.571872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.422 [2024-11-10 05:18:42.573292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.422 [2024-11-10 05:18:42.573315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:49.422 [2024-11-10 05:18:42.573323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.383 ms 00:16:49.422 [2024-11-10 05:18:42.573335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.422 [2024-11-10 05:18:42.573419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.422 [2024-11-10 05:18:42.573430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:49.422 [2024-11-10 05:18:42.573438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:16:49.422 [2024-11-10 05:18:42.573446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.422 [2024-11-10 05:18:42.578467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.422 [2024-11-10 05:18:42.578574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:49.422 [2024-11-10 05:18:42.578626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.422 [2024-11-10 05:18:42.578653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.422 [2024-11-10 05:18:42.578771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.422 [2024-11-10 05:18:42.578807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:49.422 [2024-11-10 05:18:42.578933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.422 [2024-11-10 05:18:42.578960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.422 [2024-11-10 05:18:42.579040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.422 [2024-11-10 05:18:42.579069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:49.422 [2024-11-10 05:18:42.579090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.422 [2024-11-10 05:18:42.579138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.422 [2024-11-10 05:18:42.579204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.422 [2024-11-10 05:18:42.579231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:49.422 [2024-11-10 05:18:42.579275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.422 [2024-11-10 05:18:42.579333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.422 [2024-11-10 05:18:42.588220] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:16:49.422 [2024-11-10 05:18:42.588367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:49.422 [2024-11-10 05:18:42.588420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.422 [2024-11-10 05:18:42.588437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.422 [2024-11-10 05:18:42.595698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.422 [2024-11-10 05:18:42.595736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:49.422 [2024-11-10 05:18:42.595747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.422 [2024-11-10 05:18:42.595759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.422 [2024-11-10 05:18:42.595822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.422 [2024-11-10 05:18:42.595834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:49.422 [2024-11-10 05:18:42.595842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.422 [2024-11-10 05:18:42.595859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.422 [2024-11-10 05:18:42.595923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.422 [2024-11-10 05:18:42.595934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:49.422 [2024-11-10 05:18:42.595942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.422 [2024-11-10 05:18:42.595951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.422 [2024-11-10 05:18:42.596036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.422 [2024-11-10 05:18:42.596060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:49.422 [2024-11-10 05:18:42.596069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.422 [2024-11-10 05:18:42.596078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.422 [2024-11-10 05:18:42.596134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.422 [2024-11-10 05:18:42.596147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:49.422 [2024-11-10 05:18:42.596155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.422 [2024-11-10 05:18:42.596165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.422 [2024-11-10 05:18:42.596209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.422 [2024-11-10 05:18:42.596220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:49.422 [2024-11-10 05:18:42.596228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.422 [2024-11-10 05:18:42.596237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.422 [2024-11-10 05:18:42.596291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.422 [2024-11-10 05:18:42.596302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:49.422 [2024-11-10 05:18:42.596310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.422 [2024-11-10 05:18:42.596320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.422 
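For reference, the 'FTL shutdown' sequence traced above is driven from the test scripts with the commands that appear elsewhere in this log; a minimal sketch in bash ($spdk_pid is a stand-in variable for the target app's PID, 85314 in this run):

  # Unload the FTL bdev; this starts the 'FTL shutdown' management process whose trace_step records appear above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
  # Stop the SPDK app and wait for it to exit (what the killprocess helper in autotest_common.sh boils down to)
  kill "$spdk_pid"
  wait "$spdk_pid"
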
[2024-11-10 05:18:42.596479] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.299 ms, result 0 00:16:49.422 true 00:16:49.422 05:18:42 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 85314 00:16:49.422 05:18:42 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85314 ']' 00:16:49.422 05:18:42 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85314 00:16:49.422 05:18:42 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:49.422 05:18:42 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.422 05:18:42 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85314 00:16:49.422 killing process with pid 85314 00:16:49.422 05:18:42 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:49.422 05:18:42 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:49.422 05:18:42 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85314' 00:16:49.422 05:18:42 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85314 00:16:49.422 05:18:42 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85314 00:16:54.686 05:18:47 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:55.622 65536+0 records in 00:16:55.622 65536+0 records out 00:16:55.622 268435456 bytes (268 MB, 256 MiB) copied, 0.81861 s, 328 MB/s 00:16:55.622 05:18:48 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:55.622 [2024-11-10 05:18:48.649196] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:55.622 [2024-11-10 05:18:48.649330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85480 ] 00:16:55.622 [2024-11-10 05:18:48.797217] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.622 [2024-11-10 05:18:48.839237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.883 [2024-11-10 05:18:48.931259] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:55.883 [2024-11-10 05:18:48.931322] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:55.883 [2024-11-10 05:18:49.086534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.883 [2024-11-10 05:18:49.086575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:55.883 [2024-11-10 05:18:49.086589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:55.883 [2024-11-10 05:18:49.086597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.883 [2024-11-10 05:18:49.088867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.883 [2024-11-10 05:18:49.088898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:55.883 [2024-11-10 05:18:49.088910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.254 ms 00:16:55.883 [2024-11-10 05:18:49.088917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.883 [2024-11-10 05:18:49.089001] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:55.883 [2024-11-10 05:18:49.089450] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:55.883 [2024-11-10 05:18:49.089492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.883 [2024-11-10 05:18:49.089506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:55.883 [2024-11-10 05:18:49.089520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:16:55.883 [2024-11-10 05:18:49.089528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.883 [2024-11-10 05:18:49.091041] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:55.883 [2024-11-10 05:18:49.093758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.883 [2024-11-10 05:18:49.093793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:55.883 [2024-11-10 05:18:49.093805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms 00:16:55.883 [2024-11-10 05:18:49.093815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.884 [2024-11-10 05:18:49.093880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.884 [2024-11-10 05:18:49.093890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:55.884 [2024-11-10 05:18:49.093902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:55.884 [2024-11-10 05:18:49.093909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.884 [2024-11-10 05:18:49.098916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:55.884 [2024-11-10 05:18:49.098941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:55.884 [2024-11-10 05:18:49.098951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.970 ms 00:16:55.884 [2024-11-10 05:18:49.098963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.884 [2024-11-10 05:18:49.099078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.884 [2024-11-10 05:18:49.099091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:55.884 [2024-11-10 05:18:49.099100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:55.884 [2024-11-10 05:18:49.099110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.884 [2024-11-10 05:18:49.099137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.884 [2024-11-10 05:18:49.099150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:55.884 [2024-11-10 05:18:49.099158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:55.884 [2024-11-10 05:18:49.099169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.884 [2024-11-10 05:18:49.099194] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:55.884 [2024-11-10 05:18:49.100620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.884 [2024-11-10 05:18:49.100645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:55.884 [2024-11-10 05:18:49.100654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms 00:16:55.884 [2024-11-10 05:18:49.100661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.884 [2024-11-10 05:18:49.100702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.884 [2024-11-10 05:18:49.100715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:55.884 [2024-11-10 05:18:49.100725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:55.884 [2024-11-10 05:18:49.100733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.884 [2024-11-10 05:18:49.100754] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:55.884 [2024-11-10 05:18:49.100777] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:55.884 [2024-11-10 05:18:49.100811] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:55.884 [2024-11-10 05:18:49.100827] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:55.884 [2024-11-10 05:18:49.100933] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:55.884 [2024-11-10 05:18:49.100943] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:55.884 [2024-11-10 05:18:49.100953] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:55.884 [2024-11-10 05:18:49.100963] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:55.884 [2024-11-10 05:18:49.100972] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:55.884 [2024-11-10 05:18:49.100979] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:55.884 [2024-11-10 05:18:49.100987] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:55.884 [2024-11-10 05:18:49.101017] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:55.884 [2024-11-10 05:18:49.101024] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:55.884 [2024-11-10 05:18:49.101033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.884 [2024-11-10 05:18:49.101043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:55.884 [2024-11-10 05:18:49.101053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:16:55.884 [2024-11-10 05:18:49.101059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.884 [2024-11-10 05:18:49.101146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.884 [2024-11-10 05:18:49.101156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:55.884 [2024-11-10 05:18:49.101163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:55.884 [2024-11-10 05:18:49.101174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.884 [2024-11-10 05:18:49.101273] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:55.884 [2024-11-10 05:18:49.101290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:55.884 [2024-11-10 05:18:49.101300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:55.884 [2024-11-10 05:18:49.101312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.884 [2024-11-10 05:18:49.101321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:55.884 [2024-11-10 05:18:49.101329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:55.884 [2024-11-10 05:18:49.101337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:55.884 [2024-11-10 05:18:49.101345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:55.884 [2024-11-10 05:18:49.101354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:55.884 [2024-11-10 05:18:49.101363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:55.884 [2024-11-10 05:18:49.101372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:55.884 [2024-11-10 05:18:49.101380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:55.884 [2024-11-10 05:18:49.101387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:55.884 [2024-11-10 05:18:49.101395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:55.884 [2024-11-10 05:18:49.101402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:55.884 [2024-11-10 05:18:49.101410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.884 [2024-11-10 05:18:49.101419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:55.884 [2024-11-10 05:18:49.101427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:55.884 [2024-11-10 05:18:49.101434] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.884 [2024-11-10 05:18:49.101444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:55.884 [2024-11-10 05:18:49.101452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:55.884 [2024-11-10 05:18:49.101460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.884 [2024-11-10 05:18:49.101467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:55.884 [2024-11-10 05:18:49.101475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:55.884 [2024-11-10 05:18:49.101483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.884 [2024-11-10 05:18:49.101494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:55.884 [2024-11-10 05:18:49.101501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:55.884 [2024-11-10 05:18:49.101508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.884 [2024-11-10 05:18:49.101516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:55.884 [2024-11-10 05:18:49.101524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:55.884 [2024-11-10 05:18:49.101532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.884 [2024-11-10 05:18:49.101539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:55.884 [2024-11-10 05:18:49.101547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:55.884 [2024-11-10 05:18:49.101554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:55.884 [2024-11-10 05:18:49.101562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:55.884 [2024-11-10 05:18:49.101570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:55.884 [2024-11-10 05:18:49.101577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:55.884 [2024-11-10 05:18:49.101585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:55.884 [2024-11-10 05:18:49.101592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:55.884 [2024-11-10 05:18:49.101600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.884 [2024-11-10 05:18:49.101608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:55.884 [2024-11-10 05:18:49.101616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:55.884 [2024-11-10 05:18:49.101624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.884 [2024-11-10 05:18:49.101632] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:55.884 [2024-11-10 05:18:49.101641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:55.884 [2024-11-10 05:18:49.101647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:55.884 [2024-11-10 05:18:49.101654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.884 [2024-11-10 05:18:49.101661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:55.884 [2024-11-10 05:18:49.101667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:55.884 [2024-11-10 05:18:49.101674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:55.884 
[2024-11-10 05:18:49.101681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:55.884 [2024-11-10 05:18:49.101689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:55.884 [2024-11-10 05:18:49.101696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:55.884 [2024-11-10 05:18:49.101704] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:55.884 [2024-11-10 05:18:49.101717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:55.885 [2024-11-10 05:18:49.101725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:55.885 [2024-11-10 05:18:49.101732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:55.885 [2024-11-10 05:18:49.101740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:55.885 [2024-11-10 05:18:49.101747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:55.885 [2024-11-10 05:18:49.101755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:55.885 [2024-11-10 05:18:49.101762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:55.885 [2024-11-10 05:18:49.101769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:55.885 [2024-11-10 05:18:49.101776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:55.885 [2024-11-10 05:18:49.101782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:55.885 [2024-11-10 05:18:49.101790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:55.885 [2024-11-10 05:18:49.101797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:55.885 [2024-11-10 05:18:49.101804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:55.885 [2024-11-10 05:18:49.101811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:55.885 [2024-11-10 05:18:49.101818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:55.885 [2024-11-10 05:18:49.101825] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:55.885 [2024-11-10 05:18:49.101836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:55.885 [2024-11-10 05:18:49.101845] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:55.885 [2024-11-10 05:18:49.101852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:55.885 [2024-11-10 05:18:49.101860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:55.885 [2024-11-10 05:18:49.101867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:55.885 [2024-11-10 05:18:49.101877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.885 [2024-11-10 05:18:49.101888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:55.885 [2024-11-10 05:18:49.101897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:16:55.885 [2024-11-10 05:18:49.101904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.145 [2024-11-10 05:18:49.122237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.145 [2024-11-10 05:18:49.122273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:56.145 [2024-11-10 05:18:49.122285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.281 ms 00:16:56.145 [2024-11-10 05:18:49.122294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.145 [2024-11-10 05:18:49.122432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.145 [2024-11-10 05:18:49.122444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:56.145 [2024-11-10 05:18:49.122453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:56.145 [2024-11-10 05:18:49.122465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.145 [2024-11-10 05:18:49.132043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.145 [2024-11-10 05:18:49.132077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.145 [2024-11-10 05:18:49.132090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.557 ms 00:16:56.145 [2024-11-10 05:18:49.132106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.145 [2024-11-10 05:18:49.132161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.145 [2024-11-10 05:18:49.132173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.145 [2024-11-10 05:18:49.132191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:56.145 [2024-11-10 05:18:49.132202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.145 [2024-11-10 05:18:49.132577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.145 [2024-11-10 05:18:49.132606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.145 [2024-11-10 05:18:49.132619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:16:56.145 [2024-11-10 05:18:49.132629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.145 [2024-11-10 05:18:49.132826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.145 [2024-11-10 05:18:49.132847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.145 [2024-11-10 05:18:49.132860] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:16:56.145 [2024-11-10 05:18:49.132879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.145 [2024-11-10 05:18:49.138343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.145 [2024-11-10 05:18:49.138375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.145 [2024-11-10 05:18:49.138385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.434 ms 00:16:56.145 [2024-11-10 05:18:49.138392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.145 [2024-11-10 05:18:49.141188] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:56.145 [2024-11-10 05:18:49.141220] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:56.145 [2024-11-10 05:18:49.141237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.145 [2024-11-10 05:18:49.141245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:56.145 [2024-11-10 05:18:49.141253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.767 ms 00:16:56.145 [2024-11-10 05:18:49.141261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.145 [2024-11-10 05:18:49.155856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.145 [2024-11-10 05:18:49.155892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:56.145 [2024-11-10 05:18:49.155902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.552 ms 00:16:56.145 [2024-11-10 05:18:49.155910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.145 [2024-11-10 05:18:49.157813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.145 [2024-11-10 05:18:49.157841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:56.145 [2024-11-10 05:18:49.157850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.828 ms 00:16:56.145 [2024-11-10 05:18:49.157857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.145 [2024-11-10 05:18:49.159596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.145 [2024-11-10 05:18:49.159620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:56.145 [2024-11-10 05:18:49.159629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.702 ms 00:16:56.145 [2024-11-10 05:18:49.159642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.146 [2024-11-10 05:18:49.159982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.146 [2024-11-10 05:18:49.160012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:56.146 [2024-11-10 05:18:49.160021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:16:56.146 [2024-11-10 05:18:49.160029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.146 [2024-11-10 05:18:49.176637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.146 [2024-11-10 05:18:49.176672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:56.146 [2024-11-10 05:18:49.176683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.572 ms 00:16:56.146 [2024-11-10 05:18:49.176691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.146 [2024-11-10 05:18:49.184284] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:56.146 [2024-11-10 05:18:49.198561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.146 [2024-11-10 05:18:49.198589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:56.146 [2024-11-10 05:18:49.198601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.812 ms 00:16:56.146 [2024-11-10 05:18:49.198609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.146 [2024-11-10 05:18:49.198703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.146 [2024-11-10 05:18:49.198714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:56.146 [2024-11-10 05:18:49.198723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:56.146 [2024-11-10 05:18:49.198731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.146 [2024-11-10 05:18:49.198780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.146 [2024-11-10 05:18:49.198795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:56.146 [2024-11-10 05:18:49.198807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:56.146 [2024-11-10 05:18:49.198815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.146 [2024-11-10 05:18:49.198834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.146 [2024-11-10 05:18:49.198842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:56.146 [2024-11-10 05:18:49.198849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:56.146 [2024-11-10 05:18:49.198857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.146 [2024-11-10 05:18:49.198893] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:56.146 [2024-11-10 05:18:49.198905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.146 [2024-11-10 05:18:49.198914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:56.146 [2024-11-10 05:18:49.198923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:56.146 [2024-11-10 05:18:49.198930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.146 [2024-11-10 05:18:49.202908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.146 [2024-11-10 05:18:49.202937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:56.146 [2024-11-10 05:18:49.202947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.959 ms 00:16:56.146 [2024-11-10 05:18:49.202955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.146 [2024-11-10 05:18:49.203052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.146 [2024-11-10 05:18:49.203063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:56.146 [2024-11-10 05:18:49.203074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:56.146 [2024-11-10 05:18:49.203089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.146 
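A quick arithmetic cross-check of the startup layout dump above, as a bash sketch; all values are copied from this log:

  # 23592960 L2P entries at the dumped "L2P address size: 4" bytes each = the 90.00 MiB l2p region
  echo $(( 23592960 * 4 / 1024 / 1024 ))      # prints 90
  # One entry per 4096-byte block gives the exposed bdev capacity (num_blocks 23592960)
  echo $(( 23592960 * 4096 / 1024 / 1024 ))   # prints 92160 (MiB), i.e. 90 GiB
  # The random pattern being copied through ftl0: 65536 blocks of 4 KiB
  echo $(( 65536 * 4096 ))                    # prints 268435456, matching dd's "256 MiB"
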
[2024-11-10 05:18:49.203907] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:56.146 [2024-11-10 05:18:49.204894] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.101 ms, result 0 00:16:56.146 [2024-11-10 05:18:49.206609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:56.146 [2024-11-10 05:18:49.217294] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:57.082  [2024-11-10T05:18:51.256Z .. 2024-11-10T05:19:05.661Z] Copying: 256/256 [MB] (average 15 MBps; per-refresh rates ranged roughly 10-30 MBps) [2024-11-10 05:19:05.409551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:12.425 [2024-11-10 05:19:05.411352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-11-10 05:19:05.411399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:12.425 [2024-11-10 05:19:05.411415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:12.425 [2024-11-10 05:19:05.411433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-11-10 05:19:05.411456] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:12.425 [2024-11-10 05:19:05.412181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-11-10 05:19:05.412214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:12.425 [2024-11-10 05:19:05.412226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:17:12.425 [2024-11-10 05:19:05.412236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-11-10 05:19:05.414980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-11-10 05:19:05.415032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:12.425 [2024-11-10 05:19:05.415044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.715 ms 00:17:12.425 [2024-11-10 05:19:05.415052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-11-10 05:19:05.421938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-11-10 05:19:05.422008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:12.425 [2024-11-10 05:19:05.422019]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.867 ms 00:17:12.425 [2024-11-10 05:19:05.422027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-11-10 05:19:05.428949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-11-10 05:19:05.429000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:12.425 [2024-11-10 05:19:05.429011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.879 ms 00:17:12.425 [2024-11-10 05:19:05.429019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-11-10 05:19:05.431790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-11-10 05:19:05.431831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:12.425 [2024-11-10 05:19:05.431841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms 00:17:12.425 [2024-11-10 05:19:05.431850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-11-10 05:19:05.436923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-11-10 05:19:05.436976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:12.425 [2024-11-10 05:19:05.437007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.999 ms 00:17:12.425 [2024-11-10 05:19:05.437019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-11-10 05:19:05.437151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-11-10 05:19:05.437162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:12.425 [2024-11-10 05:19:05.437171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:12.425 [2024-11-10 05:19:05.437179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-11-10 05:19:05.440024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-11-10 05:19:05.440060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:12.425 [2024-11-10 05:19:05.440070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.826 ms 00:17:12.425 [2024-11-10 05:19:05.440078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-11-10 05:19:05.442069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-11-10 05:19:05.442104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:12.425 [2024-11-10 05:19:05.442114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.948 ms 00:17:12.425 [2024-11-10 05:19:05.442123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-11-10 05:19:05.443747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-11-10 05:19:05.443785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:12.425 [2024-11-10 05:19:05.443795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.581 ms 00:17:12.425 [2024-11-10 05:19:05.443803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-11-10 05:19:05.445286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.425 [2024-11-10 05:19:05.445322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Set FTL clean state 00:17:12.425 [2024-11-10 05:19:05.445331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms 00:17:12.425 [2024-11-10 05:19:05.445338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.425 [2024-11-10 05:19:05.445376] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:12.425 [2024-11-10 05:19:05.445392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445567] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:12.425 [2024-11-10 05:19:05.445635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 
[2024-11-10 05:19:05.445791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.445974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 
state: free 00:17:12.426 [2024-11-10 05:19:05.445982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 
0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:12.426 [2024-11-10 05:19:05.446226] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:12.426 [2024-11-10 05:19:05.446240] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cc1e5a32-ec08-4380-949d-289ffcdd7e88 00:17:12.426 [2024-11-10 05:19:05.446249] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:12.426 [2024-11-10 05:19:05.446266] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:12.426 [2024-11-10 05:19:05.446274] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:12.426 [2024-11-10 05:19:05.446282] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:12.426 [2024-11-10 05:19:05.446290] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:12.426 [2024-11-10 05:19:05.446298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:12.426 [2024-11-10 05:19:05.446305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:12.426 [2024-11-10 05:19:05.446312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:12.426 [2024-11-10 05:19:05.446319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:12.426 [2024-11-10 05:19:05.446326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.426 [2024-11-10 05:19:05.446334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:12.426 [2024-11-10 05:19:05.446343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:17:12.426 [2024-11-10 05:19:05.446358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.426 [2024-11-10 05:19:05.448529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.426 [2024-11-10 05:19:05.448552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:12.426 [2024-11-10 05:19:05.448564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.151 ms 00:17:12.426 [2024-11-10 05:19:05.448573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.426 [2024-11-10 05:19:05.448692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.426 [2024-11-10 05:19:05.448710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:12.427 [2024-11-10 05:19:05.448723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:12.427 [2024-11-10 05:19:05.448732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.427 [2024-11-10 05:19:05.455695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.427 [2024-11-10 05:19:05.455733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:12.427 [2024-11-10 05:19:05.455745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.427 [2024-11-10 05:19:05.455752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.427 [2024-11-10 
05:19:05.455840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.427 [2024-11-10 05:19:05.455850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:12.427 [2024-11-10 05:19:05.455866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.427 [2024-11-10 05:19:05.455874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.427 [2024-11-10 05:19:05.455944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.427 [2024-11-10 05:19:05.455954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:12.427 [2024-11-10 05:19:05.455962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.427 [2024-11-10 05:19:05.455969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.427 [2024-11-10 05:19:05.455986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.427 [2024-11-10 05:19:05.456027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:12.427 [2024-11-10 05:19:05.456035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.427 [2024-11-10 05:19:05.456046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.427 [2024-11-10 05:19:05.469615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.427 [2024-11-10 05:19:05.469664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:12.427 [2024-11-10 05:19:05.469676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.427 [2024-11-10 05:19:05.469684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.427 [2024-11-10 05:19:05.480632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.427 [2024-11-10 05:19:05.480680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:12.427 [2024-11-10 05:19:05.480701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.427 [2024-11-10 05:19:05.480709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.427 [2024-11-10 05:19:05.480758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.427 [2024-11-10 05:19:05.480768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:12.427 [2024-11-10 05:19:05.480776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.427 [2024-11-10 05:19:05.480786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.427 [2024-11-10 05:19:05.480819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.427 [2024-11-10 05:19:05.480828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:12.427 [2024-11-10 05:19:05.480837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.427 [2024-11-10 05:19:05.480845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.427 [2024-11-10 05:19:05.480921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.427 [2024-11-10 05:19:05.480931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:12.427 [2024-11-10 05:19:05.480940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.427 [2024-11-10 05:19:05.480948] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.427 [2024-11-10 05:19:05.480979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.427 [2024-11-10 05:19:05.481013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:12.427 [2024-11-10 05:19:05.481022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.427 [2024-11-10 05:19:05.481030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.427 [2024-11-10 05:19:05.481083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.427 [2024-11-10 05:19:05.481093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:12.427 [2024-11-10 05:19:05.481102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.427 [2024-11-10 05:19:05.481112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.427 [2024-11-10 05:19:05.481159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.427 [2024-11-10 05:19:05.481170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:12.427 [2024-11-10 05:19:05.481178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.427 [2024-11-10 05:19:05.481187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.427 [2024-11-10 05:19:05.481346] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.961 ms, result 0 00:17:13.000 00:17:13.000 00:17:13.000 05:19:06 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85665 00:17:13.000 05:19:06 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85665 00:17:13.000 05:19:06 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85665 ']' 00:17:13.000 05:19:06 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.000 05:19:06 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:13.000 05:19:06 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.000 05:19:06 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:13.000 05:19:06 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:13.000 05:19:06 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:13.000 [2024-11-10 05:19:06.081469] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
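The trim test above brings up spdk_tgt and then blocks in waitforlisten until the target's JSON-RPC UNIX socket at /var/tmp/spdk.sock accepts connections; that wait is what gates the rpc.py load_config and bdev_ftl_unmap calls issued further down. A minimal sketch of such a wait loop, assuming only the socket path and a timeout (an illustrative stand-in, not SPDK's actual shell helper):

import socket
import time

def wait_for_listen(sock_path: str = "/var/tmp/spdk.sock",
                    timeout_s: float = 30.0,
                    poll_s: float = 0.1) -> None:
    """Block until a UNIX-domain socket accepts connections, as the
    autotest waitforlisten helper does before the first RPC is sent."""
    deadline = time.monotonic() + timeout_s
    while True:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(sock_path)
                return  # target is up and listening
        except (FileNotFoundError, ConnectionRefusedError):
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{sock_path} never came up")
            time.sleep(poll_s)

Polling connect() rather than watching for the socket file avoids the race where the file already exists but the target is not yet accepting connections.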
00:17:13.000 [2024-11-10 05:19:06.081571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85665 ] 00:17:13.000 [2024-11-10 05:19:06.223683] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.261 [2024-11-10 05:19:06.273578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.834 05:19:06 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.834 05:19:06 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:13.834 05:19:06 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:14.096 [2024-11-10 05:19:07.109964] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:14.096 [2024-11-10 05:19:07.110031] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:14.096 [2024-11-10 05:19:07.283039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.096 [2024-11-10 05:19:07.283097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:14.096 [2024-11-10 05:19:07.283113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:14.096 [2024-11-10 05:19:07.283123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.096 [2024-11-10 05:19:07.285892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.096 [2024-11-10 05:19:07.285949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:14.096 [2024-11-10 05:19:07.285961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.747 ms 00:17:14.096 [2024-11-10 05:19:07.285971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.096 [2024-11-10 05:19:07.286107] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:14.096 [2024-11-10 05:19:07.286376] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:14.096 [2024-11-10 05:19:07.286393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.096 [2024-11-10 05:19:07.286405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:14.096 [2024-11-10 05:19:07.286418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:17:14.096 [2024-11-10 05:19:07.286427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.096 [2024-11-10 05:19:07.288453] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:14.096 [2024-11-10 05:19:07.292082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.096 [2024-11-10 05:19:07.292132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:14.096 [2024-11-10 05:19:07.292145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.628 ms 00:17:14.096 [2024-11-10 05:19:07.292154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.096 [2024-11-10 05:19:07.292230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.096 [2024-11-10 05:19:07.292241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:14.096 [2024-11-10 05:19:07.292255] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:14.096 [2024-11-10 05:19:07.292263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.096 [2024-11-10 05:19:07.300195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.096 [2024-11-10 05:19:07.300232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:14.097 [2024-11-10 05:19:07.300245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.878 ms 00:17:14.097 [2024-11-10 05:19:07.300260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.097 [2024-11-10 05:19:07.300375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.097 [2024-11-10 05:19:07.300386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:14.097 [2024-11-10 05:19:07.300397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:14.097 [2024-11-10 05:19:07.300405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.097 [2024-11-10 05:19:07.300435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.097 [2024-11-10 05:19:07.300443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:14.097 [2024-11-10 05:19:07.300453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:14.097 [2024-11-10 05:19:07.300464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.097 [2024-11-10 05:19:07.300491] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:14.097 [2024-11-10 05:19:07.302516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.097 [2024-11-10 05:19:07.302551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:14.097 [2024-11-10 05:19:07.302561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.034 ms 00:17:14.097 [2024-11-10 05:19:07.302570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.097 [2024-11-10 05:19:07.302612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.097 [2024-11-10 05:19:07.302624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:14.097 [2024-11-10 05:19:07.302633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:14.097 [2024-11-10 05:19:07.302642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.097 [2024-11-10 05:19:07.302668] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:14.097 [2024-11-10 05:19:07.302690] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:14.097 [2024-11-10 05:19:07.302735] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:14.097 [2024-11-10 05:19:07.302755] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:14.097 [2024-11-10 05:19:07.302861] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:14.097 [2024-11-10 05:19:07.302875] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:14.097 [2024-11-10 05:19:07.302886] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:14.097 [2024-11-10 05:19:07.302902] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:14.097 [2024-11-10 05:19:07.302912] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:14.097 [2024-11-10 05:19:07.302924] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:14.097 [2024-11-10 05:19:07.302935] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:14.097 [2024-11-10 05:19:07.302945] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:14.097 [2024-11-10 05:19:07.302956] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:14.097 [2024-11-10 05:19:07.302965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.097 [2024-11-10 05:19:07.302975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:14.097 [2024-11-10 05:19:07.302985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:17:14.097 [2024-11-10 05:19:07.303011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.097 [2024-11-10 05:19:07.303101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.097 [2024-11-10 05:19:07.303110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:14.097 [2024-11-10 05:19:07.303120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:14.097 [2024-11-10 05:19:07.303127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.097 [2024-11-10 05:19:07.303231] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:14.097 [2024-11-10 05:19:07.303246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:14.097 [2024-11-10 05:19:07.303259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:14.097 [2024-11-10 05:19:07.303273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.097 [2024-11-10 05:19:07.303286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:14.097 [2024-11-10 05:19:07.303294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:14.097 [2024-11-10 05:19:07.303304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:14.097 [2024-11-10 05:19:07.303313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:14.097 [2024-11-10 05:19:07.303330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:14.097 [2024-11-10 05:19:07.303338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:14.097 [2024-11-10 05:19:07.303347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:14.097 [2024-11-10 05:19:07.303356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:14.097 [2024-11-10 05:19:07.303366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:14.097 [2024-11-10 05:19:07.303373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:14.097 [2024-11-10 05:19:07.303383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:14.097 [2024-11-10 05:19:07.303390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.097 
[2024-11-10 05:19:07.303399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:14.097 [2024-11-10 05:19:07.303408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:14.097 [2024-11-10 05:19:07.303418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.097 [2024-11-10 05:19:07.303425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:14.097 [2024-11-10 05:19:07.303446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:14.097 [2024-11-10 05:19:07.303454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.097 [2024-11-10 05:19:07.303463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:14.097 [2024-11-10 05:19:07.303472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:14.097 [2024-11-10 05:19:07.303481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.097 [2024-11-10 05:19:07.303489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:14.097 [2024-11-10 05:19:07.303500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:14.097 [2024-11-10 05:19:07.303507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.097 [2024-11-10 05:19:07.303517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:14.097 [2024-11-10 05:19:07.303525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:14.097 [2024-11-10 05:19:07.303535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.097 [2024-11-10 05:19:07.303542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:14.097 [2024-11-10 05:19:07.303550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:14.097 [2024-11-10 05:19:07.303556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:14.097 [2024-11-10 05:19:07.303565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:14.097 [2024-11-10 05:19:07.303573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:14.097 [2024-11-10 05:19:07.303584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:14.097 [2024-11-10 05:19:07.303590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:14.097 [2024-11-10 05:19:07.303599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:14.097 [2024-11-10 05:19:07.303606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.097 [2024-11-10 05:19:07.303615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:14.097 [2024-11-10 05:19:07.303621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:14.097 [2024-11-10 05:19:07.303631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.097 [2024-11-10 05:19:07.303637] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:14.097 [2024-11-10 05:19:07.303648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:14.097 [2024-11-10 05:19:07.303655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:14.097 [2024-11-10 05:19:07.303664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.097 [2024-11-10 05:19:07.303672] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:14.097 [2024-11-10 05:19:07.303680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:14.097 [2024-11-10 05:19:07.303687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:14.097 [2024-11-10 05:19:07.303696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:14.097 [2024-11-10 05:19:07.303702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:14.097 [2024-11-10 05:19:07.303718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:14.097 [2024-11-10 05:19:07.303726] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:14.097 [2024-11-10 05:19:07.303741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:14.097 [2024-11-10 05:19:07.303750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:14.097 [2024-11-10 05:19:07.303759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:14.097 [2024-11-10 05:19:07.303767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:14.097 [2024-11-10 05:19:07.303776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:14.097 [2024-11-10 05:19:07.303783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:14.097 [2024-11-10 05:19:07.303792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:14.097 [2024-11-10 05:19:07.303800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:14.098 [2024-11-10 05:19:07.303809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:14.098 [2024-11-10 05:19:07.303816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:14.098 [2024-11-10 05:19:07.303826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:14.098 [2024-11-10 05:19:07.303833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:14.098 [2024-11-10 05:19:07.303843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:14.098 [2024-11-10 05:19:07.303850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:14.098 [2024-11-10 05:19:07.303861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:14.098 [2024-11-10 05:19:07.303868] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:14.098 [2024-11-10 
05:19:07.303879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:14.098 [2024-11-10 05:19:07.303902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:14.098 [2024-11-10 05:19:07.303912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:14.098 [2024-11-10 05:19:07.303918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:14.098 [2024-11-10 05:19:07.303928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:14.098 [2024-11-10 05:19:07.303937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.098 [2024-11-10 05:19:07.303948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:14.098 [2024-11-10 05:19:07.303960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:17:14.098 [2024-11-10 05:19:07.303969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.098 [2024-11-10 05:19:07.317721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.098 [2024-11-10 05:19:07.317762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:14.098 [2024-11-10 05:19:07.317773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.675 ms 00:17:14.098 [2024-11-10 05:19:07.317783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.098 [2024-11-10 05:19:07.317907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.098 [2024-11-10 05:19:07.317928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:14.098 [2024-11-10 05:19:07.317939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:14.098 [2024-11-10 05:19:07.317948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.329789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.329839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:14.359 [2024-11-10 05:19:07.329849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.820 ms 00:17:14.359 [2024-11-10 05:19:07.329859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.329923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.329939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:14.359 [2024-11-10 05:19:07.329955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:14.359 [2024-11-10 05:19:07.329964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.330525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.330565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:14.359 [2024-11-10 05:19:07.330577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:17:14.359 [2024-11-10 05:19:07.330588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.330740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.330761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:14.359 [2024-11-10 05:19:07.330776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:17:14.359 [2024-11-10 05:19:07.330787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.346117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.346175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:14.359 [2024-11-10 05:19:07.346190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.303 ms 00:17:14.359 [2024-11-10 05:19:07.346203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.350050] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:14.359 [2024-11-10 05:19:07.350101] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:14.359 [2024-11-10 05:19:07.350121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.350134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:14.359 [2024-11-10 05:19:07.350145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.767 ms 00:17:14.359 [2024-11-10 05:19:07.350156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.366986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.367041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:14.359 [2024-11-10 05:19:07.367053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.767 ms 00:17:14.359 [2024-11-10 05:19:07.367066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.369967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.370030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:14.359 [2024-11-10 05:19:07.370041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:17:14.359 [2024-11-10 05:19:07.370051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.372460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.372503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:14.359 [2024-11-10 05:19:07.372513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.357 ms 00:17:14.359 [2024-11-10 05:19:07.372523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.372862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.372878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:14.359 [2024-11-10 05:19:07.372888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:17:14.359 [2024-11-10 05:19:07.372898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 
05:19:07.395629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.395690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:14.359 [2024-11-10 05:19:07.395704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.709 ms 00:17:14.359 [2024-11-10 05:19:07.395717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.403931] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:14.359 [2024-11-10 05:19:07.422451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.422491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:14.359 [2024-11-10 05:19:07.422506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.633 ms 00:17:14.359 [2024-11-10 05:19:07.422515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.422595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.422607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:14.359 [2024-11-10 05:19:07.422619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:14.359 [2024-11-10 05:19:07.422630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.422689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.422697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:14.359 [2024-11-10 05:19:07.422711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:14.359 [2024-11-10 05:19:07.422720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.422753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.422761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:14.359 [2024-11-10 05:19:07.422776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:14.359 [2024-11-10 05:19:07.422783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.422822] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:14.359 [2024-11-10 05:19:07.422832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.422841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:14.359 [2024-11-10 05:19:07.422849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:14.359 [2024-11-10 05:19:07.422858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.428777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.428825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:14.359 [2024-11-10 05:19:07.428837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.896 ms 00:17:14.359 [2024-11-10 05:19:07.428847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.359 [2024-11-10 05:19:07.428939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.359 [2024-11-10 05:19:07.428951] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:14.360 [2024-11-10 05:19:07.428961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:14.360 [2024-11-10 05:19:07.428971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.360 [2024-11-10 05:19:07.430017] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:14.360 [2024-11-10 05:19:07.431299] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 146.640 ms, result 0 00:17:14.360 [2024-11-10 05:19:07.433007] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:14.360 Some configs were skipped because the RPC state that can call them passed over. 00:17:14.360 05:19:07 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:14.620 [2024-11-10 05:19:07.609281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.620 [2024-11-10 05:19:07.609314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:14.620 [2024-11-10 05:19:07.609326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.516 ms 00:17:14.621 [2024-11-10 05:19:07.609334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.621 [2024-11-10 05:19:07.609367] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.609 ms, result 0 00:17:14.621 true 00:17:14.621 05:19:07 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:14.621 [2024-11-10 05:19:07.810622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.621 [2024-11-10 05:19:07.810680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:14.621 [2024-11-10 05:19:07.810694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.512 ms 00:17:14.621 [2024-11-10 05:19:07.810704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.621 [2024-11-10 05:19:07.810742] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.638 ms, result 0 00:17:14.621 true 00:17:14.621 05:19:07 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85665 00:17:14.621 05:19:07 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85665 ']' 00:17:14.621 05:19:07 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85665 00:17:14.621 05:19:07 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:14.621 05:19:07 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:14.621 05:19:07 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85665 00:17:14.883 05:19:07 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:14.883 killing process with pid 85665 00:17:14.884 05:19:07 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:14.884 05:19:07 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85665' 00:17:14.884 05:19:07 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85665 00:17:14.884 05:19:07 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85665 00:17:14.884 [2024-11-10 05:19:07.993417] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.884 [2024-11-10 05:19:07.993481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:14.884 [2024-11-10 05:19:07.993496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:14.884 [2024-11-10 05:19:07.993508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.884 [2024-11-10 05:19:07.993535] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:14.884 [2024-11-10 05:19:07.994249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.884 [2024-11-10 05:19:07.994284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:14.884 [2024-11-10 05:19:07.994296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:17:14.884 [2024-11-10 05:19:07.994306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.884 [2024-11-10 05:19:07.994604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.884 [2024-11-10 05:19:07.994621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:14.884 [2024-11-10 05:19:07.994630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:14.884 [2024-11-10 05:19:07.994648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.884 [2024-11-10 05:19:07.999278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.884 [2024-11-10 05:19:07.999320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:14.884 [2024-11-10 05:19:07.999330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.609 ms 00:17:14.884 [2024-11-10 05:19:07.999340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.884 [2024-11-10 05:19:08.006346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.884 [2024-11-10 05:19:08.006392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:14.884 [2024-11-10 05:19:08.006403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.964 ms 00:17:14.884 [2024-11-10 05:19:08.006414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.884 [2024-11-10 05:19:08.009457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.884 [2024-11-10 05:19:08.009505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:14.884 [2024-11-10 05:19:08.009515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.985 ms 00:17:14.884 [2024-11-10 05:19:08.009525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.884 [2024-11-10 05:19:08.014238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.884 [2024-11-10 05:19:08.014287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:14.884 [2024-11-10 05:19:08.014297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.666 ms 00:17:14.884 [2024-11-10 05:19:08.014307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.884 [2024-11-10 05:19:08.014447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.884 [2024-11-10 05:19:08.014460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:14.884 [2024-11-10 05:19:08.014469] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:14.884 [2024-11-10 05:19:08.014479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.884 [2024-11-10 05:19:08.016869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.884 [2024-11-10 05:19:08.016920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:14.884 [2024-11-10 05:19:08.016929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.356 ms 00:17:14.884 [2024-11-10 05:19:08.016942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.884 [2024-11-10 05:19:08.019138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.884 [2024-11-10 05:19:08.019183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:14.884 [2024-11-10 05:19:08.019193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.152 ms 00:17:14.884 [2024-11-10 05:19:08.019203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.884 [2024-11-10 05:19:08.020772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.884 [2024-11-10 05:19:08.020818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:14.884 [2024-11-10 05:19:08.020829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.523 ms 00:17:14.884 [2024-11-10 05:19:08.020839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.884 [2024-11-10 05:19:08.022291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.884 [2024-11-10 05:19:08.022332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:14.884 [2024-11-10 05:19:08.022341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.380 ms 00:17:14.884 [2024-11-10 05:19:08.022350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.884 [2024-11-10 05:19:08.022391] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:14.884 [2024-11-10 05:19:08.022409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022507] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 
[2024-11-10 05:19:08.022723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:14.884 [2024-11-10 05:19:08.022750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:14.885 [2024-11-10 05:19:08.022936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.022979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:14.885 [2024-11-10 05:19:08.023335] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:14.885 [2024-11-10 05:19:08.023343] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cc1e5a32-ec08-4380-949d-289ffcdd7e88 00:17:14.885 [2024-11-10 05:19:08.023353] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:14.885 [2024-11-10 05:19:08.023361] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:14.885 [2024-11-10 05:19:08.023370] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:14.885 [2024-11-10 05:19:08.023381] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:14.885 [2024-11-10 05:19:08.023390] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:14.885 [2024-11-10 05:19:08.023398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:14.885 [2024-11-10 05:19:08.023411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:14.885 [2024-11-10 05:19:08.023417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:14.885 [2024-11-10 05:19:08.023427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:14.885 [2024-11-10 05:19:08.023435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:14.885 [2024-11-10 05:19:08.023445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:14.885 [2024-11-10 05:19:08.023453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:17:14.885 [2024-11-10 05:19:08.023465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.885 [2024-11-10 05:19:08.025685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.885 [2024-11-10 05:19:08.025717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:14.885 [2024-11-10 05:19:08.025727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.185 ms 00:17:14.885 [2024-11-10 05:19:08.025738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.885 [2024-11-10 05:19:08.025847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.885 [2024-11-10 05:19:08.025858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:14.885 [2024-11-10 05:19:08.025870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:14.885 [2024-11-10 05:19:08.025880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.886 [2024-11-10 05:19:08.033640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.886 [2024-11-10 05:19:08.033686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:14.886 [2024-11-10 05:19:08.033695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.886 [2024-11-10 05:19:08.033705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.886 [2024-11-10 05:19:08.033797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.886 [2024-11-10 05:19:08.033810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:14.886 [2024-11-10 05:19:08.033818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.886 [2024-11-10 05:19:08.033830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.886 [2024-11-10 05:19:08.033880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.886 [2024-11-10 05:19:08.033894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:14.886 [2024-11-10 05:19:08.033902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.886 [2024-11-10 05:19:08.033912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.886 [2024-11-10 05:19:08.033931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.886 [2024-11-10 05:19:08.033942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:14.886 [2024-11-10 05:19:08.033950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.886 [2024-11-10 05:19:08.033960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.886 [2024-11-10 05:19:08.048025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.886 [2024-11-10 05:19:08.048081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:14.886 [2024-11-10 05:19:08.048092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.886 [2024-11-10 05:19:08.048102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.886 [2024-11-10 
05:19:08.058933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.886 [2024-11-10 05:19:08.059001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:14.886 [2024-11-10 05:19:08.059013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.886 [2024-11-10 05:19:08.059026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.886 [2024-11-10 05:19:08.059089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.886 [2024-11-10 05:19:08.059103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:14.886 [2024-11-10 05:19:08.059111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.886 [2024-11-10 05:19:08.059130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.886 [2024-11-10 05:19:08.059164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.886 [2024-11-10 05:19:08.059175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:14.886 [2024-11-10 05:19:08.059183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.886 [2024-11-10 05:19:08.059193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.886 [2024-11-10 05:19:08.059268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.886 [2024-11-10 05:19:08.059281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:14.886 [2024-11-10 05:19:08.059289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.886 [2024-11-10 05:19:08.059303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.886 [2024-11-10 05:19:08.059340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.886 [2024-11-10 05:19:08.059352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:14.886 [2024-11-10 05:19:08.059360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.886 [2024-11-10 05:19:08.059374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.886 [2024-11-10 05:19:08.059415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.886 [2024-11-10 05:19:08.059428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:14.886 [2024-11-10 05:19:08.059437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.886 [2024-11-10 05:19:08.059447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.886 [2024-11-10 05:19:08.059498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.886 [2024-11-10 05:19:08.059511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:14.886 [2024-11-10 05:19:08.059519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.886 [2024-11-10 05:19:08.059529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.886 [2024-11-10 05:19:08.059684] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.239 ms, result 0 00:17:15.148 05:19:08 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:15.148 05:19:08 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:15.409 [2024-11-10 05:19:08.390057] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:15.409 [2024-11-10 05:19:08.390210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85707 ] 00:17:15.409 [2024-11-10 05:19:08.543456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.409 [2024-11-10 05:19:08.592716] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.672 [2024-11-10 05:19:08.708739] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:15.672 [2024-11-10 05:19:08.708823] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:15.672 [2024-11-10 05:19:08.869521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.672 [2024-11-10 05:19:08.869584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:15.672 [2024-11-10 05:19:08.869599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:15.672 [2024-11-10 05:19:08.869608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.672 [2024-11-10 05:19:08.875504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.672 [2024-11-10 05:19:08.875626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:15.672 [2024-11-10 05:19:08.875672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.868 ms 00:17:15.672 [2024-11-10 05:19:08.875700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.672 [2024-11-10 05:19:08.876538] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:15.672 [2024-11-10 05:19:08.877417] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:15.672 [2024-11-10 05:19:08.877506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.672 [2024-11-10 05:19:08.877536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:15.672 [2024-11-10 05:19:08.877570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:17:15.672 [2024-11-10 05:19:08.877593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.672 [2024-11-10 05:19:08.880067] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:15.672 [2024-11-10 05:19:08.885202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.672 [2024-11-10 05:19:08.885255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:15.672 [2024-11-10 05:19:08.885267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.142 ms 00:17:15.672 [2024-11-10 05:19:08.885279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.672 [2024-11-10 05:19:08.885357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.672 [2024-11-10 05:19:08.885368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:15.672 [2024-11-10 05:19:08.885377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.025 ms 00:17:15.672 [2024-11-10 05:19:08.885385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.672 [2024-11-10 05:19:08.893159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.672 [2024-11-10 05:19:08.893200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:15.672 [2024-11-10 05:19:08.893211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.724 ms 00:17:15.672 [2024-11-10 05:19:08.893221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.672 [2024-11-10 05:19:08.893358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.672 [2024-11-10 05:19:08.893370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:15.672 [2024-11-10 05:19:08.893380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:15.672 [2024-11-10 05:19:08.893387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.672 [2024-11-10 05:19:08.893416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.672 [2024-11-10 05:19:08.893425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:15.672 [2024-11-10 05:19:08.893433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:15.672 [2024-11-10 05:19:08.893440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.672 [2024-11-10 05:19:08.893460] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:15.672 [2024-11-10 05:19:08.895495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.672 [2024-11-10 05:19:08.895536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:15.672 [2024-11-10 05:19:08.895549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.039 ms 00:17:15.672 [2024-11-10 05:19:08.895560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.672 [2024-11-10 05:19:08.895605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.672 [2024-11-10 05:19:08.895614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:15.672 [2024-11-10 05:19:08.895626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:15.672 [2024-11-10 05:19:08.895634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.672 [2024-11-10 05:19:08.895652] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:15.672 [2024-11-10 05:19:08.895671] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:15.672 [2024-11-10 05:19:08.895711] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:15.672 [2024-11-10 05:19:08.895731] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:15.672 [2024-11-10 05:19:08.895836] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:15.672 [2024-11-10 05:19:08.895847] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:15.672 [2024-11-10 05:19:08.895858] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:15.672 [2024-11-10 05:19:08.895876] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:15.672 [2024-11-10 05:19:08.895886] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:15.672 [2024-11-10 05:19:08.895923] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:15.672 [2024-11-10 05:19:08.895935] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:15.672 [2024-11-10 05:19:08.895944] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:15.672 [2024-11-10 05:19:08.895952] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:15.672 [2024-11-10 05:19:08.895960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.672 [2024-11-10 05:19:08.895975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:15.672 [2024-11-10 05:19:08.895987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:17:15.672 [2024-11-10 05:19:08.896011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.672 [2024-11-10 05:19:08.896104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.672 [2024-11-10 05:19:08.896113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:15.672 [2024-11-10 05:19:08.896121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:15.672 [2024-11-10 05:19:08.896129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.672 [2024-11-10 05:19:08.896234] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:15.672 [2024-11-10 05:19:08.896265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:15.672 [2024-11-10 05:19:08.896278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:15.672 [2024-11-10 05:19:08.896287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.672 [2024-11-10 05:19:08.896300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:15.672 [2024-11-10 05:19:08.896309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:15.672 [2024-11-10 05:19:08.896317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:15.672 [2024-11-10 05:19:08.896325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:15.672 [2024-11-10 05:19:08.896336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:15.672 [2024-11-10 05:19:08.896344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:15.672 [2024-11-10 05:19:08.896354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:15.672 [2024-11-10 05:19:08.896361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:15.672 [2024-11-10 05:19:08.896369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:15.672 [2024-11-10 05:19:08.896377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:15.672 [2024-11-10 05:19:08.896385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:15.672 [2024-11-10 05:19:08.896393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.672 [2024-11-10 05:19:08.896402] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:15.672 [2024-11-10 05:19:08.896410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:15.672 [2024-11-10 05:19:08.896418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.672 [2024-11-10 05:19:08.896430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:15.672 [2024-11-10 05:19:08.896438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:15.672 [2024-11-10 05:19:08.896446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.673 [2024-11-10 05:19:08.896453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:15.673 [2024-11-10 05:19:08.896461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:15.673 [2024-11-10 05:19:08.896474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.673 [2024-11-10 05:19:08.896483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:15.673 [2024-11-10 05:19:08.896491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:15.673 [2024-11-10 05:19:08.896500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.673 [2024-11-10 05:19:08.896508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:15.673 [2024-11-10 05:19:08.896517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:15.673 [2024-11-10 05:19:08.896525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.673 [2024-11-10 05:19:08.896532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:15.673 [2024-11-10 05:19:08.896540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:15.673 [2024-11-10 05:19:08.896548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:15.673 [2024-11-10 05:19:08.896557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:15.673 [2024-11-10 05:19:08.896565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:15.673 [2024-11-10 05:19:08.896572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:15.673 [2024-11-10 05:19:08.896581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:15.673 [2024-11-10 05:19:08.896589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:15.673 [2024-11-10 05:19:08.896596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.673 [2024-11-10 05:19:08.896606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:15.673 [2024-11-10 05:19:08.896615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:15.673 [2024-11-10 05:19:08.896623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.673 [2024-11-10 05:19:08.896630] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:15.673 [2024-11-10 05:19:08.896639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:15.673 [2024-11-10 05:19:08.896648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:15.673 [2024-11-10 05:19:08.896657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.673 [2024-11-10 05:19:08.896665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:15.673 
[2024-11-10 05:19:08.896672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:15.673 [2024-11-10 05:19:08.896680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:15.673 [2024-11-10 05:19:08.896687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:15.673 [2024-11-10 05:19:08.896695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:15.673 [2024-11-10 05:19:08.896703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:15.673 [2024-11-10 05:19:08.896711] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:15.673 [2024-11-10 05:19:08.896721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:15.673 [2024-11-10 05:19:08.896731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:15.673 [2024-11-10 05:19:08.896745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:15.673 [2024-11-10 05:19:08.896753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:15.673 [2024-11-10 05:19:08.896760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:15.673 [2024-11-10 05:19:08.896768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:15.673 [2024-11-10 05:19:08.896775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:15.673 [2024-11-10 05:19:08.896783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:15.673 [2024-11-10 05:19:08.896790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:15.673 [2024-11-10 05:19:08.896797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:15.673 [2024-11-10 05:19:08.896803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:15.673 [2024-11-10 05:19:08.896810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:15.673 [2024-11-10 05:19:08.896817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:15.673 [2024-11-10 05:19:08.896825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:15.673 [2024-11-10 05:19:08.896832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:15.673 [2024-11-10 05:19:08.896839] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:15.673 [2024-11-10 05:19:08.896847] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:15.673 [2024-11-10 05:19:08.896857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:15.673 [2024-11-10 05:19:08.896867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:15.673 [2024-11-10 05:19:08.896874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:15.673 [2024-11-10 05:19:08.896881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:15.673 [2024-11-10 05:19:08.896889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.673 [2024-11-10 05:19:08.896899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:15.673 [2024-11-10 05:19:08.896907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:17:15.673 [2024-11-10 05:19:08.896913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.936 [2024-11-10 05:19:08.919726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.936 [2024-11-10 05:19:08.919782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:15.936 [2024-11-10 05:19:08.919795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.757 ms 00:17:15.936 [2024-11-10 05:19:08.919804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.936 [2024-11-10 05:19:08.920011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.936 [2024-11-10 05:19:08.920025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:15.936 [2024-11-10 05:19:08.920044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:15.936 [2024-11-10 05:19:08.920056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.936 [2024-11-10 05:19:08.932855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.936 [2024-11-10 05:19:08.932904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:15.936 [2024-11-10 05:19:08.932915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.774 ms 00:17:15.936 [2024-11-10 05:19:08.932923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.936 [2024-11-10 05:19:08.933036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.936 [2024-11-10 05:19:08.933054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:15.936 [2024-11-10 05:19:08.933064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:15.936 [2024-11-10 05:19:08.933072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.936 [2024-11-10 05:19:08.933601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.936 [2024-11-10 05:19:08.933640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:15.936 [2024-11-10 05:19:08.933651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:17:15.936 [2024-11-10 05:19:08.933660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.936 [2024-11-10 
05:19:08.933818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.936 [2024-11-10 05:19:08.933828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:15.936 [2024-11-10 05:19:08.933842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:17:15.936 [2024-11-10 05:19:08.933850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.936 [2024-11-10 05:19:08.941074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.936 [2024-11-10 05:19:08.941124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:15.936 [2024-11-10 05:19:08.941134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.200 ms 00:17:15.936 [2024-11-10 05:19:08.941142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.936 [2024-11-10 05:19:08.944865] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:15.936 [2024-11-10 05:19:08.944918] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:15.936 [2024-11-10 05:19:08.944931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.936 [2024-11-10 05:19:08.944940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:15.936 [2024-11-10 05:19:08.944949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.696 ms 00:17:15.936 [2024-11-10 05:19:08.944957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.937 [2024-11-10 05:19:08.960720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.937 [2024-11-10 05:19:08.960772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:15.937 [2024-11-10 05:19:08.960784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.672 ms 00:17:15.937 [2024-11-10 05:19:08.960793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.937 [2024-11-10 05:19:08.963673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.937 [2024-11-10 05:19:08.963718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:15.937 [2024-11-10 05:19:08.963727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.791 ms 00:17:15.937 [2024-11-10 05:19:08.963734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.937 [2024-11-10 05:19:08.966255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.937 [2024-11-10 05:19:08.966298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:15.937 [2024-11-10 05:19:08.966317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.467 ms 00:17:15.937 [2024-11-10 05:19:08.966324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.937 [2024-11-10 05:19:08.966670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.937 [2024-11-10 05:19:08.966704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:15.937 [2024-11-10 05:19:08.966717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:17:15.937 [2024-11-10 05:19:08.966728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.937 [2024-11-10 05:19:08.990486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:15.937 [2024-11-10 05:19:08.990543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:15.937 [2024-11-10 05:19:08.990557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.735 ms 00:17:15.937 [2024-11-10 05:19:08.990566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.937 [2024-11-10 05:19:08.999353] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:15.937 [2024-11-10 05:19:09.018266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.937 [2024-11-10 05:19:09.018319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:15.937 [2024-11-10 05:19:09.018333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.603 ms 00:17:15.937 [2024-11-10 05:19:09.018342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.937 [2024-11-10 05:19:09.018430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.937 [2024-11-10 05:19:09.018442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:15.937 [2024-11-10 05:19:09.018452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:15.937 [2024-11-10 05:19:09.018465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.937 [2024-11-10 05:19:09.018523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.937 [2024-11-10 05:19:09.018533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:15.937 [2024-11-10 05:19:09.018547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:15.937 [2024-11-10 05:19:09.018554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.937 [2024-11-10 05:19:09.018582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.937 [2024-11-10 05:19:09.018590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:15.937 [2024-11-10 05:19:09.018599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:15.937 [2024-11-10 05:19:09.018607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.937 [2024-11-10 05:19:09.018647] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:15.937 [2024-11-10 05:19:09.018658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.937 [2024-11-10 05:19:09.018666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:15.937 [2024-11-10 05:19:09.018675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:15.937 [2024-11-10 05:19:09.018683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.937 [2024-11-10 05:19:09.024716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.937 [2024-11-10 05:19:09.024767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:15.937 [2024-11-10 05:19:09.024779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.012 ms 00:17:15.937 [2024-11-10 05:19:09.024789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.937 [2024-11-10 05:19:09.024889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.937 [2024-11-10 05:19:09.024900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:15.937 [2024-11-10 05:19:09.024910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:15.937 [2024-11-10 05:19:09.024918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.937 [2024-11-10 05:19:09.026113] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:15.937 [2024-11-10 05:19:09.027471] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.240 ms, result 0 00:17:15.937 [2024-11-10 05:19:09.028375] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:15.937 [2024-11-10 05:19:09.036136] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:16.886  [2024-11-10T05:19:11.068Z] Copying: 13/256 [MB] (13 MBps) [2024-11-10T05:19:12.454Z] Copying: 32/256 [MB] (18 MBps) [2024-11-10T05:19:13.388Z] Copying: 61/256 [MB] (29 MBps) [2024-11-10T05:19:14.322Z] Copying: 80/256 [MB] (18 MBps) [2024-11-10T05:19:15.255Z] Copying: 105/256 [MB] (25 MBps) [2024-11-10T05:19:16.189Z] Copying: 118/256 [MB] (13 MBps) [2024-11-10T05:19:17.122Z] Copying: 140/256 [MB] (21 MBps) [2024-11-10T05:19:18.058Z] Copying: 163/256 [MB] (22 MBps) [2024-11-10T05:19:19.443Z] Copying: 185/256 [MB] (21 MBps) [2024-11-10T05:19:20.378Z] Copying: 205/256 [MB] (20 MBps) [2024-11-10T05:19:21.314Z] Copying: 233/256 [MB] (27 MBps) [2024-11-10T05:19:21.314Z] Copying: 250/256 [MB] (17 MBps) [2024-11-10T05:19:21.314Z] Copying: 256/256 [MB] (average 20 MBps)[2024-11-10 05:19:21.242943] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:28.078 [2024-11-10 05:19:21.244078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.078 [2024-11-10 05:19:21.244109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:28.078 [2024-11-10 05:19:21.244121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:28.078 [2024-11-10 05:19:21.244129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.078 [2024-11-10 05:19:21.244149] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:28.078 [2024-11-10 05:19:21.244558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.078 [2024-11-10 05:19:21.244579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:28.078 [2024-11-10 05:19:21.244588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:17:28.078 [2024-11-10 05:19:21.244595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.078 [2024-11-10 05:19:21.244841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.078 [2024-11-10 05:19:21.244860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:28.078 [2024-11-10 05:19:21.244868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:17:28.078 [2024-11-10 05:19:21.244875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.078 [2024-11-10 05:19:21.248558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.078 [2024-11-10 05:19:21.248581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:28.078 [2024-11-10 05:19:21.248591] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.665 ms 00:17:28.078 [2024-11-10 05:19:21.248598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.078 [2024-11-10 05:19:21.255520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.078 [2024-11-10 05:19:21.255543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:28.078 [2024-11-10 05:19:21.255552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.907 ms 00:17:28.078 [2024-11-10 05:19:21.255559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.078 [2024-11-10 05:19:21.257816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.078 [2024-11-10 05:19:21.257848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:28.078 [2024-11-10 05:19:21.257857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.199 ms 00:17:28.078 [2024-11-10 05:19:21.257871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.078 [2024-11-10 05:19:21.261887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.078 [2024-11-10 05:19:21.261925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:28.078 [2024-11-10 05:19:21.261934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.985 ms 00:17:28.078 [2024-11-10 05:19:21.261941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.078 [2024-11-10 05:19:21.262065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.078 [2024-11-10 05:19:21.262074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:28.078 [2024-11-10 05:19:21.262082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:28.078 [2024-11-10 05:19:21.262089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.078 [2024-11-10 05:19:21.264629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.078 [2024-11-10 05:19:21.264660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:28.078 [2024-11-10 05:19:21.264668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.521 ms 00:17:28.078 [2024-11-10 05:19:21.264675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.078 [2024-11-10 05:19:21.266600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.078 [2024-11-10 05:19:21.266628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:28.078 [2024-11-10 05:19:21.266636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.896 ms 00:17:28.078 [2024-11-10 05:19:21.266643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.078 [2024-11-10 05:19:21.268380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.078 [2024-11-10 05:19:21.268413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:28.079 [2024-11-10 05:19:21.268421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.709 ms 00:17:28.079 [2024-11-10 05:19:21.268428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.079 [2024-11-10 05:19:21.270174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.079 [2024-11-10 05:19:21.270202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Set FTL clean state 00:17:28.079 [2024-11-10 05:19:21.270209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:17:28.079 [2024-11-10 05:19:21.270216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.079 [2024-11-10 05:19:21.270243] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:28.079 [2024-11-10 05:19:21.270256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270413] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 
[2024-11-10 05:19:21.270589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 
state: free 00:17:28.079 [2024-11-10 05:19:21.270768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:28.079 [2024-11-10 05:19:21.270797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 
0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:28.080 [2024-11-10 05:19:21.270982] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:28.080 [2024-11-10 05:19:21.271005] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cc1e5a32-ec08-4380-949d-289ffcdd7e88 00:17:28.080 [2024-11-10 05:19:21.271018] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:28.080 [2024-11-10 05:19:21.271029] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:28.080 [2024-11-10 05:19:21.271036] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:28.080 [2024-11-10 05:19:21.271043] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:28.080 [2024-11-10 05:19:21.271050] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:28.080 [2024-11-10 05:19:21.271058] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:28.080 [2024-11-10 05:19:21.271065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:28.080 [2024-11-10 05:19:21.271071] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:28.080 [2024-11-10 05:19:21.271077] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:28.080 [2024-11-10 05:19:21.271084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.080 [2024-11-10 05:19:21.271094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:28.080 [2024-11-10 05:19:21.271102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:17:28.080 [2024-11-10 05:19:21.271109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.080 [2024-11-10 05:19:21.272540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.080 [2024-11-10 05:19:21.272563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:28.080 [2024-11-10 05:19:21.272571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.416 ms 00:17:28.080 [2024-11-10 05:19:21.272578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.080 [2024-11-10 05:19:21.272654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.080 [2024-11-10 05:19:21.272662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:28.080 [2024-11-10 05:19:21.272670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:28.080 [2024-11-10 05:19:21.272677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.080 [2024-11-10 05:19:21.277296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.080 [2024-11-10 05:19:21.277327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.080 [2024-11-10 05:19:21.277337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.080 [2024-11-10 05:19:21.277344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.080 [2024-11-10 
05:19:21.277408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.080 [2024-11-10 05:19:21.277416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.080 [2024-11-10 05:19:21.277423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.080 [2024-11-10 05:19:21.277434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.080 [2024-11-10 05:19:21.277468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.080 [2024-11-10 05:19:21.277477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.080 [2024-11-10 05:19:21.277484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.080 [2024-11-10 05:19:21.277491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.080 [2024-11-10 05:19:21.277507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.080 [2024-11-10 05:19:21.277516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.080 [2024-11-10 05:19:21.277526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.080 [2024-11-10 05:19:21.277533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.080 [2024-11-10 05:19:21.285863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.080 [2024-11-10 05:19:21.285901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.080 [2024-11-10 05:19:21.285911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.080 [2024-11-10 05:19:21.285919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.080 [2024-11-10 05:19:21.292582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.080 [2024-11-10 05:19:21.292618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.080 [2024-11-10 05:19:21.292627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.080 [2024-11-10 05:19:21.292635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.080 [2024-11-10 05:19:21.292673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.080 [2024-11-10 05:19:21.292681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.080 [2024-11-10 05:19:21.292689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.080 [2024-11-10 05:19:21.292696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.080 [2024-11-10 05:19:21.292723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.080 [2024-11-10 05:19:21.292731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.080 [2024-11-10 05:19:21.292743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.080 [2024-11-10 05:19:21.292750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.080 [2024-11-10 05:19:21.292811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.080 [2024-11-10 05:19:21.292821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.080 [2024-11-10 05:19:21.292828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.080 [2024-11-10 05:19:21.292835] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.080 [2024-11-10 05:19:21.292864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.080 [2024-11-10 05:19:21.292872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:28.080 [2024-11-10 05:19:21.292880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.080 [2024-11-10 05:19:21.292889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.080 [2024-11-10 05:19:21.292921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.080 [2024-11-10 05:19:21.292929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.080 [2024-11-10 05:19:21.292937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.080 [2024-11-10 05:19:21.292944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.080 [2024-11-10 05:19:21.292983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.080 [2024-11-10 05:19:21.293071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.080 [2024-11-10 05:19:21.293087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.081 [2024-11-10 05:19:21.293094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.081 [2024-11-10 05:19:21.293219] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.124 ms, result 0 00:17:28.339 00:17:28.339 00:17:28.339 05:19:21 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:28.339 05:19:21 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:28.907 05:19:22 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:28.907 [2024-11-10 05:19:22.082601] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
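Aside: the three trim.sh commands above carry the actual data path of this test. cmp --bytes=4194304 compares the first 4 MiB of test/ftl/data against /dev/zero and exits zero only if that range is all zeroes (presumably confirming the trimmed range reads back empty); md5sum records a checksum of the data file; spdk_dd then writes 1024 blocks of the prepared random pattern into the ftl0 bdev. A minimal sketch of that sequence, reusing only the paths and flags visible in this log (the SPDK variable is shorthand added here):

    #!/usr/bin/env bash
    # Sketch of the trim.sh verify/write steps seen above; paths and flags
    # are copied from the log itself, nothing here is new API.
    set -e
    SPDK=/home/vagrant/spdk_repo/spdk

    # Exits 0 only if the first 4 MiB of the data file are zeroes.
    cmp --bytes=4194304 "$SPDK/test/ftl/data" /dev/zero

    # Record a checksum of the data file for later comparison.
    md5sum "$SPDK/test/ftl/data"

    # Write 1024 blocks of the prepared random pattern into the ftl0 bdev.
    "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/random_pattern" --ob=ftl0 \
        --count=1024 --json="$SPDK/test/ftl/config/ftl.json"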
00:17:28.907 [2024-11-10 05:19:22.082724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85855 ] 00:17:29.164 [2024-11-10 05:19:22.231372] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.164 [2024-11-10 05:19:22.263762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.164 [2024-11-10 05:19:22.349746] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:29.164 [2024-11-10 05:19:22.349811] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:29.424 [2024-11-10 05:19:22.506393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.424 [2024-11-10 05:19:22.506436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:29.424 [2024-11-10 05:19:22.506452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:29.424 [2024-11-10 05:19:22.506460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.424 [2024-11-10 05:19:22.508721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.424 [2024-11-10 05:19:22.508756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:29.424 [2024-11-10 05:19:22.508768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.246 ms 00:17:29.424 [2024-11-10 05:19:22.508775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.424 [2024-11-10 05:19:22.508841] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:29.424 [2024-11-10 05:19:22.509327] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:29.424 [2024-11-10 05:19:22.509370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.424 [2024-11-10 05:19:22.509380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:29.424 [2024-11-10 05:19:22.509392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:17:29.424 [2024-11-10 05:19:22.509400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.424 [2024-11-10 05:19:22.510521] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:29.424 [2024-11-10 05:19:22.513041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.424 [2024-11-10 05:19:22.513080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:29.424 [2024-11-10 05:19:22.513090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.521 ms 00:17:29.424 [2024-11-10 05:19:22.513099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.424 [2024-11-10 05:19:22.513153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.425 [2024-11-10 05:19:22.513162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:29.425 [2024-11-10 05:19:22.513171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:29.425 [2024-11-10 05:19:22.513177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.425 [2024-11-10 05:19:22.517849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:29.425 [2024-11-10 05:19:22.517879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:29.425 [2024-11-10 05:19:22.517891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.637 ms 00:17:29.425 [2024-11-10 05:19:22.517898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.425 [2024-11-10 05:19:22.518010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.425 [2024-11-10 05:19:22.518023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:29.425 [2024-11-10 05:19:22.518032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:29.425 [2024-11-10 05:19:22.518039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.425 [2024-11-10 05:19:22.518065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.425 [2024-11-10 05:19:22.518080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:29.425 [2024-11-10 05:19:22.518087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:29.425 [2024-11-10 05:19:22.518094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.425 [2024-11-10 05:19:22.518114] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:29.425 [2024-11-10 05:19:22.519417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.425 [2024-11-10 05:19:22.519441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:29.425 [2024-11-10 05:19:22.519454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.308 ms 00:17:29.425 [2024-11-10 05:19:22.519460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.425 [2024-11-10 05:19:22.519492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.425 [2024-11-10 05:19:22.519502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:29.425 [2024-11-10 05:19:22.519510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:29.425 [2024-11-10 05:19:22.519518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.425 [2024-11-10 05:19:22.519534] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:29.425 [2024-11-10 05:19:22.519555] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:29.425 [2024-11-10 05:19:22.519588] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:29.425 [2024-11-10 05:19:22.519603] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:29.425 [2024-11-10 05:19:22.519706] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:29.425 [2024-11-10 05:19:22.519715] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:29.425 [2024-11-10 05:19:22.519725] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:29.425 [2024-11-10 05:19:22.519734] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:29.425 [2024-11-10 05:19:22.519746] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:29.425 [2024-11-10 05:19:22.519753] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:29.425 [2024-11-10 05:19:22.519760] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:29.425 [2024-11-10 05:19:22.519767] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:29.425 [2024-11-10 05:19:22.519774] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:29.425 [2024-11-10 05:19:22.519781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.425 [2024-11-10 05:19:22.519788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:29.425 [2024-11-10 05:19:22.519799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:17:29.425 [2024-11-10 05:19:22.519806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.425 [2024-11-10 05:19:22.519896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.425 [2024-11-10 05:19:22.519904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:29.425 [2024-11-10 05:19:22.519921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:29.425 [2024-11-10 05:19:22.519928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.425 [2024-11-10 05:19:22.520040] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:29.425 [2024-11-10 05:19:22.520054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:29.425 [2024-11-10 05:19:22.520062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:29.425 [2024-11-10 05:19:22.520073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.425 [2024-11-10 05:19:22.520081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:29.425 [2024-11-10 05:19:22.520088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:29.425 [2024-11-10 05:19:22.520095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:29.425 [2024-11-10 05:19:22.520103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:29.425 [2024-11-10 05:19:22.520111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:29.425 [2024-11-10 05:19:22.520122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:29.425 [2024-11-10 05:19:22.520130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:29.425 [2024-11-10 05:19:22.520137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:29.425 [2024-11-10 05:19:22.520145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:29.425 [2024-11-10 05:19:22.520152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:29.425 [2024-11-10 05:19:22.520159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:29.425 [2024-11-10 05:19:22.520166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.425 [2024-11-10 05:19:22.520173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:29.425 [2024-11-10 05:19:22.520183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:29.425 [2024-11-10 05:19:22.520191] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.425 [2024-11-10 05:19:22.520198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:29.425 [2024-11-10 05:19:22.520205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:29.425 [2024-11-10 05:19:22.520213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.425 [2024-11-10 05:19:22.520220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:29.425 [2024-11-10 05:19:22.520227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:29.425 [2024-11-10 05:19:22.520234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.425 [2024-11-10 05:19:22.520246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:29.425 [2024-11-10 05:19:22.520254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:29.425 [2024-11-10 05:19:22.520261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.425 [2024-11-10 05:19:22.520269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:29.425 [2024-11-10 05:19:22.520276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:29.425 [2024-11-10 05:19:22.520283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.425 [2024-11-10 05:19:22.520290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:29.425 [2024-11-10 05:19:22.520297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:29.425 [2024-11-10 05:19:22.520304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:29.425 [2024-11-10 05:19:22.520312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:29.425 [2024-11-10 05:19:22.520319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:29.425 [2024-11-10 05:19:22.520326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:29.425 [2024-11-10 05:19:22.520333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:29.425 [2024-11-10 05:19:22.520340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:29.425 [2024-11-10 05:19:22.520348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.425 [2024-11-10 05:19:22.520355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:29.425 [2024-11-10 05:19:22.520364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:29.425 [2024-11-10 05:19:22.520371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.425 [2024-11-10 05:19:22.520379] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:29.425 [2024-11-10 05:19:22.520387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:29.425 [2024-11-10 05:19:22.520395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:29.426 [2024-11-10 05:19:22.520403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.426 [2024-11-10 05:19:22.520411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:29.426 [2024-11-10 05:19:22.520419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:29.426 [2024-11-10 05:19:22.520427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:29.426 
[2024-11-10 05:19:22.520434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:29.426 [2024-11-10 05:19:22.520442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:29.426 [2024-11-10 05:19:22.520450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:29.426 [2024-11-10 05:19:22.520458] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:29.426 [2024-11-10 05:19:22.520468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:29.426 [2024-11-10 05:19:22.520477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:29.426 [2024-11-10 05:19:22.520485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:29.426 [2024-11-10 05:19:22.520495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:29.426 [2024-11-10 05:19:22.520503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:29.426 [2024-11-10 05:19:22.520512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:29.426 [2024-11-10 05:19:22.520520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:29.426 [2024-11-10 05:19:22.520527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:29.426 [2024-11-10 05:19:22.520535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:29.426 [2024-11-10 05:19:22.520543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:29.426 [2024-11-10 05:19:22.520551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:29.426 [2024-11-10 05:19:22.520559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:29.426 [2024-11-10 05:19:22.520567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:29.426 [2024-11-10 05:19:22.520574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:29.426 [2024-11-10 05:19:22.520582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:29.426 [2024-11-10 05:19:22.520590] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:29.426 [2024-11-10 05:19:22.520600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:29.426 [2024-11-10 05:19:22.520608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:29.426 [2024-11-10 05:19:22.520616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:29.426 [2024-11-10 05:19:22.520625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:29.426 [2024-11-10 05:19:22.520634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:29.426 [2024-11-10 05:19:22.520642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.426 [2024-11-10 05:19:22.520650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:29.426 [2024-11-10 05:19:22.520660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:17:29.426 [2024-11-10 05:19:22.520668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.426 [2024-11-10 05:19:22.537316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.426 [2024-11-10 05:19:22.537373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:29.426 [2024-11-10 05:19:22.537391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.584 ms 00:17:29.426 [2024-11-10 05:19:22.537415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.426 [2024-11-10 05:19:22.537617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.426 [2024-11-10 05:19:22.537635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:29.426 [2024-11-10 05:19:22.537649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:17:29.426 [2024-11-10 05:19:22.537670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.426 [2024-11-10 05:19:22.547431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.426 [2024-11-10 05:19:22.547477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:29.426 [2024-11-10 05:19:22.547497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.730 ms 00:17:29.426 [2024-11-10 05:19:22.547508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.426 [2024-11-10 05:19:22.547589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.426 [2024-11-10 05:19:22.547605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:29.426 [2024-11-10 05:19:22.547621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:29.426 [2024-11-10 05:19:22.547632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.426 [2024-11-10 05:19:22.548073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.426 [2024-11-10 05:19:22.548111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:29.426 [2024-11-10 05:19:22.548125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:17:29.426 [2024-11-10 05:19:22.548145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.426 [2024-11-10 05:19:22.548334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.426 [2024-11-10 05:19:22.548350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:29.426 [2024-11-10 05:19:22.548363] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:17:29.426 [2024-11-10 05:19:22.548378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.426 [2024-11-10 05:19:22.553024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.426 [2024-11-10 05:19:22.553054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:29.426 [2024-11-10 05:19:22.553063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.611 ms 00:17:29.426 [2024-11-10 05:19:22.553073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.426 [2024-11-10 05:19:22.555581] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:29.426 [2024-11-10 05:19:22.555617] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:29.426 [2024-11-10 05:19:22.555628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.426 [2024-11-10 05:19:22.555635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:29.426 [2024-11-10 05:19:22.555643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.481 ms 00:17:29.426 [2024-11-10 05:19:22.555649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.426 [2024-11-10 05:19:22.570068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.426 [2024-11-10 05:19:22.570101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:29.426 [2024-11-10 05:19:22.570111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.378 ms 00:17:29.426 [2024-11-10 05:19:22.570118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.426 [2024-11-10 05:19:22.571887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.426 [2024-11-10 05:19:22.571936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:29.426 [2024-11-10 05:19:22.571945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.702 ms 00:17:29.426 [2024-11-10 05:19:22.571952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.426 [2024-11-10 05:19:22.573787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.426 [2024-11-10 05:19:22.573816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:29.427 [2024-11-10 05:19:22.573825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.799 ms 00:17:29.427 [2024-11-10 05:19:22.573837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.427 [2024-11-10 05:19:22.574155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.427 [2024-11-10 05:19:22.574171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:29.427 [2024-11-10 05:19:22.574179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:29.427 [2024-11-10 05:19:22.574193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.427 [2024-11-10 05:19:22.590058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.427 [2024-11-10 05:19:22.590099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:29.427 [2024-11-10 05:19:22.590111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.831 ms 00:17:29.427 [2024-11-10 05:19:22.590119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.427 [2024-11-10 05:19:22.597492] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:29.427 [2024-11-10 05:19:22.611200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.427 [2024-11-10 05:19:22.611238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:29.427 [2024-11-10 05:19:22.611250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.026 ms 00:17:29.427 [2024-11-10 05:19:22.611257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.427 [2024-11-10 05:19:22.611342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.427 [2024-11-10 05:19:22.611352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:29.427 [2024-11-10 05:19:22.611361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:29.427 [2024-11-10 05:19:22.611372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.427 [2024-11-10 05:19:22.611420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.427 [2024-11-10 05:19:22.611432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:29.427 [2024-11-10 05:19:22.611440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:29.427 [2024-11-10 05:19:22.611448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.427 [2024-11-10 05:19:22.611467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.427 [2024-11-10 05:19:22.611475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:29.427 [2024-11-10 05:19:22.611485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:29.427 [2024-11-10 05:19:22.611492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.427 [2024-11-10 05:19:22.611523] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:29.427 [2024-11-10 05:19:22.611535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.427 [2024-11-10 05:19:22.611544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:29.427 [2024-11-10 05:19:22.611552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:29.427 [2024-11-10 05:19:22.611559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.427 [2024-11-10 05:19:22.615521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.427 [2024-11-10 05:19:22.615553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:29.427 [2024-11-10 05:19:22.615563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.945 ms 00:17:29.427 [2024-11-10 05:19:22.615570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.427 [2024-11-10 05:19:22.615652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.427 [2024-11-10 05:19:22.615665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:29.427 [2024-11-10 05:19:22.615673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:29.427 [2024-11-10 05:19:22.615680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.427 
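Aside: the bulk of this output is the FTL management state machine tracing itself; each step logs an Action record from mngt/ftl_mngt.c as a name/duration/status group. When hunting for where startup time goes, those records can be folded into a table. A small hypothetical helper, not part of the SPDK tree, assuming exactly the *NOTICE* format shown in this log:

    #!/usr/bin/env bash
    # ftl_step_times.sh -- hypothetical log-side helper (not in SPDK).
    # Pairs each trace_step "name:" record with the "duration:" record
    # that follows it and prints the ten slowest steps.
    # Usage: ./ftl_step_times.sh autotest.log
    awk '
      /trace_step.*name:/     { sub(/.*name: /, ""); name = $0 }
      /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                                printf "%10.3f ms  %s\n", $0 + 0, name }
    ' "$1" | sort -rn | head -10

Run against this log it would surface steps like "Initialize L2P" (21.026 ms) and "Restore P2L checkpoints" (15.831 ms) at the top.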
[2024-11-10 05:19:22.616493] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:29.427 [2024-11-10 05:19:22.617461] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 109.835 ms, result 0 00:17:29.427 [2024-11-10 05:19:22.618575] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:29.427 [2024-11-10 05:19:22.627954] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:29.687  [2024-11-10T05:19:22.923Z] Copying: 4096/4096 [kB] (average 18 MBps)[2024-11-10 05:19:22.842237] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:29.687 [2024-11-10 05:19:22.842848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.687 [2024-11-10 05:19:22.842878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:29.687 [2024-11-10 05:19:22.842896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:29.687 [2024-11-10 05:19:22.842903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.687 [2024-11-10 05:19:22.842922] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:29.687 [2024-11-10 05:19:22.843342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.687 [2024-11-10 05:19:22.843363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:29.687 [2024-11-10 05:19:22.843372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:17:29.687 [2024-11-10 05:19:22.843384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.687 [2024-11-10 05:19:22.845108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.687 [2024-11-10 05:19:22.845140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:29.687 [2024-11-10 05:19:22.845149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:17:29.687 [2024-11-10 05:19:22.845156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.687 [2024-11-10 05:19:22.849240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.687 [2024-11-10 05:19:22.849264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:29.687 [2024-11-10 05:19:22.849273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.065 ms 00:17:29.687 [2024-11-10 05:19:22.849280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.687 [2024-11-10 05:19:22.856106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.687 [2024-11-10 05:19:22.856131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:29.687 [2024-11-10 05:19:22.856141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.802 ms 00:17:29.687 [2024-11-10 05:19:22.856148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.687 [2024-11-10 05:19:22.857590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.687 [2024-11-10 05:19:22.857622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:29.687 [2024-11-10 05:19:22.857630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 1.388 ms 00:17:29.687 [2024-11-10 05:19:22.857644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.687 [2024-11-10 05:19:22.861515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.687 [2024-11-10 05:19:22.861549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:29.688 [2024-11-10 05:19:22.861562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.841 ms 00:17:29.688 [2024-11-10 05:19:22.861570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.688 [2024-11-10 05:19:22.861682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.688 [2024-11-10 05:19:22.861690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:29.688 [2024-11-10 05:19:22.861698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:29.688 [2024-11-10 05:19:22.861705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.688 [2024-11-10 05:19:22.864369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.688 [2024-11-10 05:19:22.864411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:29.688 [2024-11-10 05:19:22.864422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.647 ms 00:17:29.688 [2024-11-10 05:19:22.864429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.688 [2024-11-10 05:19:22.866901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.688 [2024-11-10 05:19:22.866931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:29.688 [2024-11-10 05:19:22.866940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.438 ms 00:17:29.688 [2024-11-10 05:19:22.866947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.688 [2024-11-10 05:19:22.868421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.688 [2024-11-10 05:19:22.868453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:29.688 [2024-11-10 05:19:22.868461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms 00:17:29.688 [2024-11-10 05:19:22.868467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.688 [2024-11-10 05:19:22.869649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.688 [2024-11-10 05:19:22.869683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:29.688 [2024-11-10 05:19:22.869691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.127 ms 00:17:29.688 [2024-11-10 05:19:22.869698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.688 [2024-11-10 05:19:22.869725] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:29.688 [2024-11-10 05:19:22.869743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 
05:19:22.869778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:17:29.688 [2024-11-10 05:19:22.869954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.869982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:29.688 [2024-11-10 05:19:22.870251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:29.689 [2024-11-10 05:19:22.870480] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:29.689 [2024-11-10 05:19:22.870487] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cc1e5a32-ec08-4380-949d-289ffcdd7e88 00:17:29.689 [2024-11-10 05:19:22.870500] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:29.689 [2024-11-10 05:19:22.870513] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:29.689 
[2024-11-10 05:19:22.870523] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:29.689 [2024-11-10 05:19:22.870530] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:29.689 [2024-11-10 05:19:22.870540] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:29.689 [2024-11-10 05:19:22.870547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:29.689 [2024-11-10 05:19:22.870554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:29.689 [2024-11-10 05:19:22.870560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:29.689 [2024-11-10 05:19:22.870567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:29.689 [2024-11-10 05:19:22.870573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.689 [2024-11-10 05:19:22.870580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:29.689 [2024-11-10 05:19:22.870590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:17:29.689 [2024-11-10 05:19:22.870597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.689 [2024-11-10 05:19:22.871798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.689 [2024-11-10 05:19:22.871820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:29.689 [2024-11-10 05:19:22.871834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:17:29.689 [2024-11-10 05:19:22.871841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.689 [2024-11-10 05:19:22.871930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.689 [2024-11-10 05:19:22.871939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:29.689 [2024-11-10 05:19:22.871947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:29.689 [2024-11-10 05:19:22.871954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.689 [2024-11-10 05:19:22.876551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.689 [2024-11-10 05:19:22.876582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:29.689 [2024-11-10 05:19:22.876591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.689 [2024-11-10 05:19:22.876598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.689 [2024-11-10 05:19:22.876664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.689 [2024-11-10 05:19:22.876675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:29.689 [2024-11-10 05:19:22.876682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.689 [2024-11-10 05:19:22.876689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.689 [2024-11-10 05:19:22.876722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.689 [2024-11-10 05:19:22.876730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:29.689 [2024-11-10 05:19:22.876737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.689 [2024-11-10 05:19:22.876744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.689 [2024-11-10 05:19:22.876759] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:29.689 [2024-11-10 05:19:22.876770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:29.689 [2024-11-10 05:19:22.876779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.689 [2024-11-10 05:19:22.876786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.689 [2024-11-10 05:19:22.885144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.689 [2024-11-10 05:19:22.885181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:29.689 [2024-11-10 05:19:22.885190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.689 [2024-11-10 05:19:22.885197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.689 [2024-11-10 05:19:22.891845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.689 [2024-11-10 05:19:22.891884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:29.689 [2024-11-10 05:19:22.891894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.689 [2024-11-10 05:19:22.891901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.689 [2024-11-10 05:19:22.891947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.689 [2024-11-10 05:19:22.891956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:29.689 [2024-11-10 05:19:22.891963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.689 [2024-11-10 05:19:22.891971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.689 [2024-11-10 05:19:22.892058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.689 [2024-11-10 05:19:22.892072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:29.689 [2024-11-10 05:19:22.892080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.689 [2024-11-10 05:19:22.892089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.689 [2024-11-10 05:19:22.892150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.689 [2024-11-10 05:19:22.892159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:29.689 [2024-11-10 05:19:22.892167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.689 [2024-11-10 05:19:22.892174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.689 [2024-11-10 05:19:22.892203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.689 [2024-11-10 05:19:22.892212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:29.689 [2024-11-10 05:19:22.892219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.689 [2024-11-10 05:19:22.892226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.689 [2024-11-10 05:19:22.892264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.689 [2024-11-10 05:19:22.892273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:29.689 [2024-11-10 05:19:22.892280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.689 [2024-11-10 05:19:22.892288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:29.689 [2024-11-10 05:19:22.892330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.689 [2024-11-10 05:19:22.892339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:29.689 [2024-11-10 05:19:22.892346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.689 [2024-11-10 05:19:22.892356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.689 [2024-11-10 05:19:22.892478] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.606 ms, result 0 00:17:29.949 00:17:29.949 00:17:29.949 05:19:23 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=85868 00:17:29.949 05:19:23 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 85868 00:17:29.949 05:19:23 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85868 ']' 00:17:29.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.949 05:19:23 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.949 05:19:23 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:29.949 05:19:23 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.949 05:19:23 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:29.949 05:19:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:29.949 05:19:23 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:29.949 [2024-11-10 05:19:23.156635] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:29.949 [2024-11-10 05:19:23.156755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85868 ] 00:17:30.208 [2024-11-10 05:19:23.304733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.208 [2024-11-10 05:19:23.337747] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.775 05:19:23 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.775 05:19:23 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:30.775 05:19:23 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:31.034 [2024-11-10 05:19:24.192783] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:31.034 [2024-11-10 05:19:24.192847] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:31.295 [2024-11-10 05:19:24.364870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.295 [2024-11-10 05:19:24.364917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:31.295 [2024-11-10 05:19:24.364930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:31.295 [2024-11-10 05:19:24.364940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.295 [2024-11-10 05:19:24.367159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.295 [2024-11-10 05:19:24.367198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:31.295 [2024-11-10 05:19:24.367208] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.202 ms 00:17:31.295 [2024-11-10 05:19:24.367217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.295 [2024-11-10 05:19:24.367288] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:31.295 [2024-11-10 05:19:24.367530] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:31.295 [2024-11-10 05:19:24.367550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.295 [2024-11-10 05:19:24.367560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:31.295 [2024-11-10 05:19:24.367568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:17:31.295 [2024-11-10 05:19:24.367577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.295 [2024-11-10 05:19:24.368672] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:31.295 [2024-11-10 05:19:24.371275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.295 [2024-11-10 05:19:24.371309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:31.295 [2024-11-10 05:19:24.371320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.600 ms 00:17:31.295 [2024-11-10 05:19:24.371328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.295 [2024-11-10 05:19:24.371387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.295 [2024-11-10 05:19:24.371397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:31.295 [2024-11-10 05:19:24.371408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:31.295 [2024-11-10 05:19:24.371415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.295 [2024-11-10 05:19:24.376254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.295 [2024-11-10 05:19:24.376284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:31.295 [2024-11-10 05:19:24.376295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.791 ms 00:17:31.295 [2024-11-10 05:19:24.376302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.295 [2024-11-10 05:19:24.376388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.295 [2024-11-10 05:19:24.376398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:31.295 [2024-11-10 05:19:24.376408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:31.295 [2024-11-10 05:19:24.376415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.295 [2024-11-10 05:19:24.376445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.295 [2024-11-10 05:19:24.376454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:31.295 [2024-11-10 05:19:24.376463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:31.295 [2024-11-10 05:19:24.376472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.295 [2024-11-10 05:19:24.376495] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:31.295 [2024-11-10 05:19:24.377813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:31.295 [2024-11-10 05:19:24.377850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:31.295 [2024-11-10 05:19:24.377859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.324 ms 00:17:31.295 [2024-11-10 05:19:24.377868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.295 [2024-11-10 05:19:24.377901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.295 [2024-11-10 05:19:24.377910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:31.295 [2024-11-10 05:19:24.377918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:31.295 [2024-11-10 05:19:24.377927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.295 [2024-11-10 05:19:24.377946] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:31.295 [2024-11-10 05:19:24.377964] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:31.295 [2024-11-10 05:19:24.378020] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:31.295 [2024-11-10 05:19:24.378051] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:31.295 [2024-11-10 05:19:24.378153] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:31.295 [2024-11-10 05:19:24.378174] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:31.295 [2024-11-10 05:19:24.378184] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:31.295 [2024-11-10 05:19:24.378195] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:31.295 [2024-11-10 05:19:24.378204] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:31.295 [2024-11-10 05:19:24.378215] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:31.295 [2024-11-10 05:19:24.378223] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:31.295 [2024-11-10 05:19:24.378234] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:31.295 [2024-11-10 05:19:24.378244] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:31.295 [2024-11-10 05:19:24.378259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.295 [2024-11-10 05:19:24.378268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:31.295 [2024-11-10 05:19:24.378277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:17:31.295 [2024-11-10 05:19:24.378283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.295 [2024-11-10 05:19:24.378372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.295 [2024-11-10 05:19:24.378385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:31.295 [2024-11-10 05:19:24.378394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:31.296 [2024-11-10 05:19:24.378404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.296 [2024-11-10 05:19:24.378504] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:31.296 [2024-11-10 05:19:24.378514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:31.296 [2024-11-10 05:19:24.378527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:31.296 [2024-11-10 05:19:24.378535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.296 [2024-11-10 05:19:24.378547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:31.296 [2024-11-10 05:19:24.378555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:31.296 [2024-11-10 05:19:24.378565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:31.296 [2024-11-10 05:19:24.378573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:31.296 [2024-11-10 05:19:24.378586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:31.296 [2024-11-10 05:19:24.378594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:31.296 [2024-11-10 05:19:24.378604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:31.296 [2024-11-10 05:19:24.378611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:31.296 [2024-11-10 05:19:24.378620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:31.296 [2024-11-10 05:19:24.378627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:31.296 [2024-11-10 05:19:24.378636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:31.296 [2024-11-10 05:19:24.378644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.296 [2024-11-10 05:19:24.378653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:31.296 [2024-11-10 05:19:24.378660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:31.296 [2024-11-10 05:19:24.378671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.296 [2024-11-10 05:19:24.378679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:31.296 [2024-11-10 05:19:24.378691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:31.296 [2024-11-10 05:19:24.378698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.296 [2024-11-10 05:19:24.378708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:31.296 [2024-11-10 05:19:24.378715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:31.296 [2024-11-10 05:19:24.378724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.296 [2024-11-10 05:19:24.378731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:31.296 [2024-11-10 05:19:24.378740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:31.296 [2024-11-10 05:19:24.378747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.296 [2024-11-10 05:19:24.378756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:31.296 [2024-11-10 05:19:24.378764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:31.296 [2024-11-10 05:19:24.378772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.296 [2024-11-10 05:19:24.378780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:31.296 [2024-11-10 
05:19:24.378789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:31.296 [2024-11-10 05:19:24.378796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:31.296 [2024-11-10 05:19:24.378805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:31.296 [2024-11-10 05:19:24.378813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:31.296 [2024-11-10 05:19:24.378823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:31.296 [2024-11-10 05:19:24.378831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:31.296 [2024-11-10 05:19:24.378840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:31.296 [2024-11-10 05:19:24.378847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.296 [2024-11-10 05:19:24.378856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:31.296 [2024-11-10 05:19:24.378863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:31.296 [2024-11-10 05:19:24.378872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.296 [2024-11-10 05:19:24.378880] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:31.296 [2024-11-10 05:19:24.378891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:31.296 [2024-11-10 05:19:24.378899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:31.296 [2024-11-10 05:19:24.378908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.296 [2024-11-10 05:19:24.378917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:31.296 [2024-11-10 05:19:24.378926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:31.296 [2024-11-10 05:19:24.378933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:31.296 [2024-11-10 05:19:24.378943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:31.296 [2024-11-10 05:19:24.378950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:31.296 [2024-11-10 05:19:24.378962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:31.296 [2024-11-10 05:19:24.378971] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:31.296 [2024-11-10 05:19:24.378982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:31.296 [2024-11-10 05:19:24.379008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:31.296 [2024-11-10 05:19:24.379018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:31.296 [2024-11-10 05:19:24.379027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:31.296 [2024-11-10 05:19:24.379035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:31.296 [2024-11-10 05:19:24.379043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:31.296 
[2024-11-10 05:19:24.379052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:31.296 [2024-11-10 05:19:24.379059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:31.296 [2024-11-10 05:19:24.379068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:31.296 [2024-11-10 05:19:24.379075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:31.296 [2024-11-10 05:19:24.379084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:31.296 [2024-11-10 05:19:24.379090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:31.296 [2024-11-10 05:19:24.379099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:31.296 [2024-11-10 05:19:24.379106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:31.296 [2024-11-10 05:19:24.379116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:31.296 [2024-11-10 05:19:24.379124] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:31.296 [2024-11-10 05:19:24.379135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:31.296 [2024-11-10 05:19:24.379144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:31.296 [2024-11-10 05:19:24.379153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:31.296 [2024-11-10 05:19:24.379160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:31.296 [2024-11-10 05:19:24.379169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:31.296 [2024-11-10 05:19:24.379176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.296 [2024-11-10 05:19:24.379189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:31.296 [2024-11-10 05:19:24.379197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:17:31.296 [2024-11-10 05:19:24.379205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.296 [2024-11-10 05:19:24.387782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.296 [2024-11-10 05:19:24.387820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.296 [2024-11-10 05:19:24.387830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.515 ms 00:17:31.296 [2024-11-10 05:19:24.387839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.296 [2024-11-10 05:19:24.387957] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.296 [2024-11-10 05:19:24.387972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:31.296 [2024-11-10 05:19:24.387982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:31.296 [2024-11-10 05:19:24.388003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.296 [2024-11-10 05:19:24.396122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.296 [2024-11-10 05:19:24.396158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:31.296 [2024-11-10 05:19:24.396167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.098 ms 00:17:31.296 [2024-11-10 05:19:24.396176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.296 [2024-11-10 05:19:24.396223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.296 [2024-11-10 05:19:24.396236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:31.296 [2024-11-10 05:19:24.396244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:31.296 [2024-11-10 05:19:24.396253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.296 [2024-11-10 05:19:24.396558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.296 [2024-11-10 05:19:24.396586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:31.296 [2024-11-10 05:19:24.396595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:17:31.296 [2024-11-10 05:19:24.396603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.296 [2024-11-10 05:19:24.396730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.297 [2024-11-10 05:19:24.396742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:31.297 [2024-11-10 05:19:24.396753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:31.297 [2024-11-10 05:19:24.396763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.297 [2024-11-10 05:19:24.410253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.297 [2024-11-10 05:19:24.410299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:31.297 [2024-11-10 05:19:24.410311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.467 ms 00:17:31.297 [2024-11-10 05:19:24.410320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.297 [2024-11-10 05:19:24.413052] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:31.297 [2024-11-10 05:19:24.413097] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:31.297 [2024-11-10 05:19:24.413110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.297 [2024-11-10 05:19:24.413121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:31.297 [2024-11-10 05:19:24.413132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.679 ms 00:17:31.297 [2024-11-10 05:19:24.413142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.297 [2024-11-10 05:19:24.429086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.297 [2024-11-10 
05:19:24.429123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:31.297 [2024-11-10 05:19:24.429133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.897 ms 00:17:31.297 [2024-11-10 05:19:24.429145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.297 [2024-11-10 05:19:24.431174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.297 [2024-11-10 05:19:24.431207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:31.297 [2024-11-10 05:19:24.431216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.964 ms 00:17:31.297 [2024-11-10 05:19:24.431225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.297 [2024-11-10 05:19:24.433005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.297 [2024-11-10 05:19:24.433035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:31.297 [2024-11-10 05:19:24.433044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.744 ms 00:17:31.297 [2024-11-10 05:19:24.433052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.297 [2024-11-10 05:19:24.433360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.297 [2024-11-10 05:19:24.433379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:31.297 [2024-11-10 05:19:24.433388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:17:31.297 [2024-11-10 05:19:24.433397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.297 [2024-11-10 05:19:24.449590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.297 [2024-11-10 05:19:24.449637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:31.297 [2024-11-10 05:19:24.449647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.156 ms 00:17:31.297 [2024-11-10 05:19:24.449658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.297 [2024-11-10 05:19:24.457238] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:31.297 [2024-11-10 05:19:24.471252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.297 [2024-11-10 05:19:24.471289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:31.297 [2024-11-10 05:19:24.471301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.534 ms 00:17:31.297 [2024-11-10 05:19:24.471308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.297 [2024-11-10 05:19:24.471394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.297 [2024-11-10 05:19:24.471404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:31.297 [2024-11-10 05:19:24.471415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:31.297 [2024-11-10 05:19:24.471424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.297 [2024-11-10 05:19:24.471471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.297 [2024-11-10 05:19:24.471479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:31.297 [2024-11-10 05:19:24.471497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:31.297 [2024-11-10 
05:19:24.471504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.297 [2024-11-10 05:19:24.471533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.297 [2024-11-10 05:19:24.471541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:31.297 [2024-11-10 05:19:24.471551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:31.297 [2024-11-10 05:19:24.471558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.297 [2024-11-10 05:19:24.471589] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:31.297 [2024-11-10 05:19:24.471598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.297 [2024-11-10 05:19:24.471607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:31.297 [2024-11-10 05:19:24.471614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:31.297 [2024-11-10 05:19:24.471622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.297 [2024-11-10 05:19:24.475482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.297 [2024-11-10 05:19:24.475519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:31.297 [2024-11-10 05:19:24.475528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.840 ms 00:17:31.297 [2024-11-10 05:19:24.475537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.297 [2024-11-10 05:19:24.475622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.297 [2024-11-10 05:19:24.475634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:31.297 [2024-11-10 05:19:24.475643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:31.297 [2024-11-10 05:19:24.475652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.297 [2024-11-10 05:19:24.476683] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:31.297 [2024-11-10 05:19:24.477697] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.549 ms, result 0 00:17:31.297 [2024-11-10 05:19:24.479934] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:31.297 Some configs were skipped because the RPC state that can call them passed over. 
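For reference, the flow captured above and in the two RPC calls that follow is plain SPDK plumbing: trim.sh starts spdk_tgt with FTL init tracing, waits for the RPC socket, restores the bdev stack with load_config, then exercises the trim path with bdev_ftl_unmap at both ends of the device. A minimal bash sketch of that sequence, using the binary and script paths seen in this run; the readiness loop and the config.json filename are illustrative assumptions, not taken from the log:

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk

    # Start the target with FTL init tracing (as trim.sh@92 does) and keep its pid.
    "$SPDK/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!

    # Poll the default RPC socket until the target answers (a stand-in for the
    # harness's waitforlisten helper).
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

    # Recreate the nvc0n1/ftl0 bdev stack from a previously saved configuration;
    # load_config reads JSON from stdin (config.json is a hypothetical filename).
    "$SPDK/scripts/rpc.py" load_config < config.json

    # Unmap 1024 blocks at the start and at the end of the 23592960-entry L2P
    # (23591936 = 23592960 - 1024, matching the layout dumped during startup).
    "$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    "$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

    # Tear down; the target runs the 'FTL shutdown' management process on exit.
    kill "$svcpid"
    wait "$svcpid"

Each successful unmap returns true and logs a "Management process finished, name 'FTL trim' ... result 0" notice, which is exactly what the two RPC transcripts below show.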
00:17:31.297 05:19:24 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:31.556 [2024-11-10 05:19:24.703035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.556 [2024-11-10 05:19:24.703078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:31.556 [2024-11-10 05:19:24.703091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.061 ms 00:17:31.556 [2024-11-10 05:19:24.703099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.556 [2024-11-10 05:19:24.703132] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.166 ms, result 0 00:17:31.556 true 00:17:31.556 05:19:24 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:31.815 [2024-11-10 05:19:24.906676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.815 [2024-11-10 05:19:24.906719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:31.815 [2024-11-10 05:19:24.906729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.429 ms 00:17:31.815 [2024-11-10 05:19:24.906738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.815 [2024-11-10 05:19:24.906769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.522 ms, result 0 00:17:31.815 true 00:17:31.815 05:19:24 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 85868 00:17:31.815 05:19:24 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85868 ']' 00:17:31.815 05:19:24 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85868 00:17:31.815 05:19:24 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:31.815 05:19:24 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:31.815 05:19:24 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85868 00:17:31.815 05:19:24 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:31.815 killing process with pid 85868 00:17:31.815 05:19:24 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:31.815 05:19:24 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85868' 00:17:31.815 05:19:24 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85868 00:17:31.815 05:19:24 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85868 00:17:31.815 [2024-11-10 05:19:25.031372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.815 [2024-11-10 05:19:25.031428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:31.815 [2024-11-10 05:19:25.031443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:31.815 [2024-11-10 05:19:25.031451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.815 [2024-11-10 05:19:25.031476] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:31.815 [2024-11-10 05:19:25.031921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.815 [2024-11-10 05:19:25.031947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:31.815 [2024-11-10 05:19:25.031960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.432 ms 00:17:31.815 [2024-11-10 05:19:25.031970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.815 [2024-11-10 05:19:25.032261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.815 [2024-11-10 05:19:25.032280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:31.815 [2024-11-10 05:19:25.032290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:17:31.815 [2024-11-10 05:19:25.032300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.815 [2024-11-10 05:19:25.036802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.815 [2024-11-10 05:19:25.036837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:31.815 [2024-11-10 05:19:25.036847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.483 ms 00:17:31.815 [2024-11-10 05:19:25.036859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.815 [2024-11-10 05:19:25.043822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.815 [2024-11-10 05:19:25.043859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:31.815 [2024-11-10 05:19:25.043869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.929 ms 00:17:31.815 [2024-11-10 05:19:25.043884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.815 [2024-11-10 05:19:25.046215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.815 [2024-11-10 05:19:25.046250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:31.815 [2024-11-10 05:19:25.046259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.254 ms 00:17:31.815 [2024-11-10 05:19:25.046267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.075 [2024-11-10 05:19:25.050175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.075 [2024-11-10 05:19:25.050213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:32.075 [2024-11-10 05:19:25.050222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.874 ms 00:17:32.075 [2024-11-10 05:19:25.050231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.075 [2024-11-10 05:19:25.050361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.075 [2024-11-10 05:19:25.050373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:32.075 [2024-11-10 05:19:25.050381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:32.075 [2024-11-10 05:19:25.050389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.075 [2024-11-10 05:19:25.052826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.075 [2024-11-10 05:19:25.052879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:32.075 [2024-11-10 05:19:25.052890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.417 ms 00:17:32.075 [2024-11-10 05:19:25.052901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.075 [2024-11-10 05:19:25.055206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.075 [2024-11-10 05:19:25.055241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:32.075 [2024-11-10 
05:19:25.055250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.264 ms 00:17:32.075 [2024-11-10 05:19:25.055258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.075 [2024-11-10 05:19:25.057006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.075 [2024-11-10 05:19:25.057043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:32.075 [2024-11-10 05:19:25.057052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.714 ms 00:17:32.075 [2024-11-10 05:19:25.057062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.075 [2024-11-10 05:19:25.058770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.075 [2024-11-10 05:19:25.058804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:32.075 [2024-11-10 05:19:25.058813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.649 ms 00:17:32.075 [2024-11-10 05:19:25.058822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.075 [2024-11-10 05:19:25.058854] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:32.075 [2024-11-10 05:19:25.058872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:32.075 [2024-11-10 05:19:25.058883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:32.075 [2024-11-10 05:19:25.058899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:32.075 [2024-11-10 05:19:25.058907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:32.075 [2024-11-10 05:19:25.058919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.058928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.058938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.058946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.058956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.058965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.058976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.058984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059045] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 
05:19:25.059278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:17:32.076 [2024-11-10 05:19:25.059488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:32.076 [2024-11-10 05:19:25.059723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:32.077 [2024-11-10 05:19:25.059730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:32.077 [2024-11-10 05:19:25.059740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:32.077 [2024-11-10 05:19:25.059748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:32.077 [2024-11-10 05:19:25.059758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:32.077 [2024-11-10 05:19:25.059766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:32.077 [2024-11-10 05:19:25.059783] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:32.077 [2024-11-10 05:19:25.059796] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cc1e5a32-ec08-4380-949d-289ffcdd7e88 00:17:32.077 [2024-11-10 05:19:25.059805] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:32.077 [2024-11-10 05:19:25.059813] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:32.077 [2024-11-10 05:19:25.059821] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:32.077 [2024-11-10 05:19:25.059831] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:32.077 [2024-11-10 05:19:25.059840] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:32.077 [2024-11-10 05:19:25.059848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:32.077 [2024-11-10 05:19:25.059859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:32.077 [2024-11-10 05:19:25.059865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:32.077 [2024-11-10 05:19:25.059873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:32.077 [2024-11-10 05:19:25.059880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.077 [2024-11-10 05:19:25.059889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:32.077 [2024-11-10 05:19:25.059897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:17:32.077 [2024-11-10 05:19:25.059907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.077 [2024-11-10 05:19:25.061345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.077 [2024-11-10 05:19:25.061375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:32.077 [2024-11-10 05:19:25.061384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.392 ms 00:17:32.077 [2024-11-10 05:19:25.061393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.077 [2024-11-10 05:19:25.061468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:32.077 [2024-11-10 05:19:25.061478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:32.077 [2024-11-10 05:19:25.061486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:32.077 [2024-11-10 05:19:25.061495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.077 [2024-11-10 05:19:25.066599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.077 [2024-11-10 05:19:25.066635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:32.077 [2024-11-10 05:19:25.066649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.077 [2024-11-10 05:19:25.066658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.077 [2024-11-10 05:19:25.066735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.077 [2024-11-10 05:19:25.066747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:32.077 [2024-11-10 05:19:25.066754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.077 [2024-11-10 05:19:25.066765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.077 [2024-11-10 05:19:25.066801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.077 [2024-11-10 05:19:25.066817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.077 [2024-11-10 05:19:25.066828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.077 [2024-11-10 05:19:25.066836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.077 [2024-11-10 05:19:25.066852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.077 [2024-11-10 05:19:25.066862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.077 [2024-11-10 05:19:25.066868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.077 [2024-11-10 05:19:25.066876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.077 [2024-11-10 05:19:25.075656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.077 [2024-11-10 05:19:25.075703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.077 [2024-11-10 05:19:25.075712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.077 [2024-11-10 05:19:25.075722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.077 [2024-11-10 05:19:25.082556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.077 [2024-11-10 05:19:25.082595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:32.077 [2024-11-10 05:19:25.082604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.077 [2024-11-10 05:19:25.082615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.077 [2024-11-10 05:19:25.082667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.077 [2024-11-10 05:19:25.082678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:32.077 [2024-11-10 05:19:25.082686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.077 [2024-11-10 05:19:25.082697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:32.077 [2024-11-10 05:19:25.082726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.077 [2024-11-10 05:19:25.082736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:32.077 [2024-11-10 05:19:25.082744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.077 [2024-11-10 05:19:25.082753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.077 [2024-11-10 05:19:25.082813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.077 [2024-11-10 05:19:25.082824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:32.077 [2024-11-10 05:19:25.082832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.077 [2024-11-10 05:19:25.082843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.077 [2024-11-10 05:19:25.082870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.077 [2024-11-10 05:19:25.082880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:32.077 [2024-11-10 05:19:25.082888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.077 [2024-11-10 05:19:25.082898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.077 [2024-11-10 05:19:25.082940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.077 [2024-11-10 05:19:25.082950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:32.077 [2024-11-10 05:19:25.082957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.077 [2024-11-10 05:19:25.082966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.077 [2024-11-10 05:19:25.083065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.077 [2024-11-10 05:19:25.083084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:32.077 [2024-11-10 05:19:25.083092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.077 [2024-11-10 05:19:25.083101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.077 [2024-11-10 05:19:25.083228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.835 ms, result 0 00:17:32.077 05:19:25 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:32.337 [2024-11-10 05:19:25.343747] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:32.337 [2024-11-10 05:19:25.343863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85904 ] 00:17:32.337 [2024-11-10 05:19:25.492311] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.337 [2024-11-10 05:19:25.526617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.598 [2024-11-10 05:19:25.616775] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:32.598 [2024-11-10 05:19:25.616846] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:32.598 [2024-11-10 05:19:25.773775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.598 [2024-11-10 05:19:25.773831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:32.598 [2024-11-10 05:19:25.773844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:32.598 [2024-11-10 05:19:25.773852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.598 [2024-11-10 05:19:25.776216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.598 [2024-11-10 05:19:25.776254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:32.598 [2024-11-10 05:19:25.776270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.344 ms 00:17:32.598 [2024-11-10 05:19:25.776277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.598 [2024-11-10 05:19:25.776362] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:32.598 [2024-11-10 05:19:25.776885] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:32.598 [2024-11-10 05:19:25.776939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.598 [2024-11-10 05:19:25.776950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:32.598 [2024-11-10 05:19:25.776963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:17:32.598 [2024-11-10 05:19:25.776971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.598 [2024-11-10 05:19:25.778424] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:32.598 [2024-11-10 05:19:25.781405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.598 [2024-11-10 05:19:25.781454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:32.598 [2024-11-10 05:19:25.781465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.983 ms 00:17:32.598 [2024-11-10 05:19:25.781476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.598 [2024-11-10 05:19:25.781545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.598 [2024-11-10 05:19:25.781555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:32.598 [2024-11-10 05:19:25.781564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:32.599 [2024-11-10 05:19:25.781571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.599 [2024-11-10 05:19:25.787948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:32.599 [2024-11-10 05:19:25.787982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:32.599 [2024-11-10 05:19:25.788004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.339 ms 00:17:32.599 [2024-11-10 05:19:25.788012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.599 [2024-11-10 05:19:25.788134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.599 [2024-11-10 05:19:25.788148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:32.599 [2024-11-10 05:19:25.788157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:32.599 [2024-11-10 05:19:25.788168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.599 [2024-11-10 05:19:25.788195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.599 [2024-11-10 05:19:25.788207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:32.599 [2024-11-10 05:19:25.788216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:32.599 [2024-11-10 05:19:25.788223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.599 [2024-11-10 05:19:25.788245] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:32.599 [2024-11-10 05:19:25.790013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.599 [2024-11-10 05:19:25.790043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:32.599 [2024-11-10 05:19:25.790052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.775 ms 00:17:32.599 [2024-11-10 05:19:25.790060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.599 [2024-11-10 05:19:25.790115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.599 [2024-11-10 05:19:25.790127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:32.599 [2024-11-10 05:19:25.790138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:32.599 [2024-11-10 05:19:25.790145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.599 [2024-11-10 05:19:25.790164] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:32.599 [2024-11-10 05:19:25.790182] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:32.599 [2024-11-10 05:19:25.790217] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:32.599 [2024-11-10 05:19:25.790232] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:32.599 [2024-11-10 05:19:25.790337] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:32.599 [2024-11-10 05:19:25.790347] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:32.599 [2024-11-10 05:19:25.790358] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:32.599 [2024-11-10 05:19:25.790368] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:32.599 [2024-11-10 05:19:25.790377] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:32.599 [2024-11-10 05:19:25.790388] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:32.599 [2024-11-10 05:19:25.790395] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:32.599 [2024-11-10 05:19:25.790403] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:32.599 [2024-11-10 05:19:25.790410] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:32.599 [2024-11-10 05:19:25.790418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.599 [2024-11-10 05:19:25.790428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:32.599 [2024-11-10 05:19:25.790437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:17:32.599 [2024-11-10 05:19:25.790445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.599 [2024-11-10 05:19:25.790532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.599 [2024-11-10 05:19:25.790540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:32.599 [2024-11-10 05:19:25.790548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:32.599 [2024-11-10 05:19:25.790555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.599 [2024-11-10 05:19:25.790656] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:32.599 [2024-11-10 05:19:25.790672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:32.599 [2024-11-10 05:19:25.790681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:32.599 [2024-11-10 05:19:25.790692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.599 [2024-11-10 05:19:25.790704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:32.599 [2024-11-10 05:19:25.790713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:32.599 [2024-11-10 05:19:25.790721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:32.599 [2024-11-10 05:19:25.790729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:32.599 [2024-11-10 05:19:25.790739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:32.599 [2024-11-10 05:19:25.790747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:32.599 [2024-11-10 05:19:25.790756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:32.599 [2024-11-10 05:19:25.790763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:32.599 [2024-11-10 05:19:25.790771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:32.599 [2024-11-10 05:19:25.790779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:32.599 [2024-11-10 05:19:25.790788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:32.599 [2024-11-10 05:19:25.790796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.599 [2024-11-10 05:19:25.790804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:32.599 [2024-11-10 05:19:25.790812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:32.599 [2024-11-10 05:19:25.790820] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.599 [2024-11-10 05:19:25.790828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:32.599 [2024-11-10 05:19:25.790835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:32.599 [2024-11-10 05:19:25.790843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.599 [2024-11-10 05:19:25.790850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:32.599 [2024-11-10 05:19:25.790858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:32.599 [2024-11-10 05:19:25.790871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.599 [2024-11-10 05:19:25.790880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:32.599 [2024-11-10 05:19:25.790888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:32.599 [2024-11-10 05:19:25.790895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.599 [2024-11-10 05:19:25.790902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:32.599 [2024-11-10 05:19:25.790910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:32.599 [2024-11-10 05:19:25.790917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.599 [2024-11-10 05:19:25.790925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:32.599 [2024-11-10 05:19:25.790933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:32.599 [2024-11-10 05:19:25.790940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:32.599 [2024-11-10 05:19:25.790947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:32.599 [2024-11-10 05:19:25.790955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:32.599 [2024-11-10 05:19:25.790962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:32.599 [2024-11-10 05:19:25.790970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:32.599 [2024-11-10 05:19:25.790977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:32.599 [2024-11-10 05:19:25.790984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.599 [2024-11-10 05:19:25.791008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:32.599 [2024-11-10 05:19:25.791016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:32.599 [2024-11-10 05:19:25.791023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.599 [2024-11-10 05:19:25.791030] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:32.599 [2024-11-10 05:19:25.791039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:32.599 [2024-11-10 05:19:25.791048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:32.599 [2024-11-10 05:19:25.791058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.599 [2024-11-10 05:19:25.791067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:32.599 [2024-11-10 05:19:25.791075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:32.599 [2024-11-10 05:19:25.791082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:32.599 
[2024-11-10 05:19:25.791090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:32.599 [2024-11-10 05:19:25.791098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:32.599 [2024-11-10 05:19:25.791106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:32.599 [2024-11-10 05:19:25.791115] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:32.599 [2024-11-10 05:19:25.791125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:32.599 [2024-11-10 05:19:25.791135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:32.599 [2024-11-10 05:19:25.791146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:32.599 [2024-11-10 05:19:25.791155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:32.599 [2024-11-10 05:19:25.791163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:32.599 [2024-11-10 05:19:25.791171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:32.599 [2024-11-10 05:19:25.791179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:32.600 [2024-11-10 05:19:25.791188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:32.600 [2024-11-10 05:19:25.791196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:32.600 [2024-11-10 05:19:25.791204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:32.600 [2024-11-10 05:19:25.791212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:32.600 [2024-11-10 05:19:25.791219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:32.600 [2024-11-10 05:19:25.791227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:32.600 [2024-11-10 05:19:25.791233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:32.600 [2024-11-10 05:19:25.791241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:32.600 [2024-11-10 05:19:25.791248] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:32.600 [2024-11-10 05:19:25.791256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:32.600 [2024-11-10 05:19:25.791264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:32.600 [2024-11-10 05:19:25.791273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:32.600 [2024-11-10 05:19:25.791280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:32.600 [2024-11-10 05:19:25.791287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:32.600 [2024-11-10 05:19:25.791295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.600 [2024-11-10 05:19:25.791302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:32.600 [2024-11-10 05:19:25.791311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:17:32.600 [2024-11-10 05:19:25.791321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.600 [2024-11-10 05:19:25.813494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.600 [2024-11-10 05:19:25.813556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:32.600 [2024-11-10 05:19:25.813575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.118 ms 00:17:32.600 [2024-11-10 05:19:25.813586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.600 [2024-11-10 05:19:25.813789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.600 [2024-11-10 05:19:25.813807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:32.600 [2024-11-10 05:19:25.813829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:32.600 [2024-11-10 05:19:25.813843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.600 [2024-11-10 05:19:25.824339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.600 [2024-11-10 05:19:25.824384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.600 [2024-11-10 05:19:25.824400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.465 ms 00:17:32.600 [2024-11-10 05:19:25.824408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.600 [2024-11-10 05:19:25.824477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.600 [2024-11-10 05:19:25.824487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.600 [2024-11-10 05:19:25.824499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:32.600 [2024-11-10 05:19:25.824507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.600 [2024-11-10 05:19:25.824964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.600 [2024-11-10 05:19:25.825015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.600 [2024-11-10 05:19:25.825027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:17:32.600 [2024-11-10 05:19:25.825035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.600 [2024-11-10 05:19:25.825194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.600 [2024-11-10 05:19:25.825205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:32.600 [2024-11-10 05:19:25.825218] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:17:32.600 [2024-11-10 05:19:25.825229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.861 [2024-11-10 05:19:25.831802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.861 [2024-11-10 05:19:25.831856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:32.861 [2024-11-10 05:19:25.831866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.546 ms 00:17:32.861 [2024-11-10 05:19:25.831873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.861 [2024-11-10 05:19:25.835424] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:32.861 [2024-11-10 05:19:25.835477] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:32.862 [2024-11-10 05:19:25.835490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.862 [2024-11-10 05:19:25.835498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:32.862 [2024-11-10 05:19:25.835507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.506 ms 00:17:32.862 [2024-11-10 05:19:25.835513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.862 [2024-11-10 05:19:25.851040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.862 [2024-11-10 05:19:25.851097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:32.862 [2024-11-10 05:19:25.851110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.453 ms 00:17:32.862 [2024-11-10 05:19:25.851118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.862 [2024-11-10 05:19:25.853776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.862 [2024-11-10 05:19:25.853826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:32.862 [2024-11-10 05:19:25.853836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.571 ms 00:17:32.862 [2024-11-10 05:19:25.853843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.862 [2024-11-10 05:19:25.856508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.862 [2024-11-10 05:19:25.856554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:32.862 [2024-11-10 05:19:25.856573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.610 ms 00:17:32.862 [2024-11-10 05:19:25.856580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.862 [2024-11-10 05:19:25.856921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.862 [2024-11-10 05:19:25.856934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:32.862 [2024-11-10 05:19:25.856944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:17:32.862 [2024-11-10 05:19:25.856959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.862 [2024-11-10 05:19:25.879306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.862 [2024-11-10 05:19:25.879364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:32.862 [2024-11-10 05:19:25.879377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.323 ms 00:17:32.862 [2024-11-10 05:19:25.879392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.862 [2024-11-10 05:19:25.887498] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:32.862 [2024-11-10 05:19:25.905616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.862 [2024-11-10 05:19:25.905665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:32.862 [2024-11-10 05:19:25.905683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.136 ms 00:17:32.862 [2024-11-10 05:19:25.905691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.862 [2024-11-10 05:19:25.905776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.862 [2024-11-10 05:19:25.905788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:32.862 [2024-11-10 05:19:25.905800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:32.862 [2024-11-10 05:19:25.905809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.862 [2024-11-10 05:19:25.905867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.862 [2024-11-10 05:19:25.905877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:32.862 [2024-11-10 05:19:25.905885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:32.862 [2024-11-10 05:19:25.905898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.862 [2024-11-10 05:19:25.905919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.862 [2024-11-10 05:19:25.905927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:32.862 [2024-11-10 05:19:25.905936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:32.862 [2024-11-10 05:19:25.905947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.862 [2024-11-10 05:19:25.905981] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:32.862 [2024-11-10 05:19:25.906020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.862 [2024-11-10 05:19:25.906029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:32.862 [2024-11-10 05:19:25.906037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:32.862 [2024-11-10 05:19:25.906051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.862 [2024-11-10 05:19:25.911612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.862 [2024-11-10 05:19:25.911663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:32.862 [2024-11-10 05:19:25.911675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.538 ms 00:17:32.862 [2024-11-10 05:19:25.911682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.862 [2024-11-10 05:19:25.911776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.862 [2024-11-10 05:19:25.911789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:32.862 [2024-11-10 05:19:25.911798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:32.862 [2024-11-10 05:19:25.911807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.862 
[2024-11-10 05:19:25.912794] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:32.862 [2024-11-10 05:19:25.914097] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 138.704 ms, result 0 00:17:32.862 [2024-11-10 05:19:25.915469] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:32.862 [2024-11-10 05:19:25.922676] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:33.804  [2024-11-10T05:19:27.984Z] Copying: 14/256 [MB] (14 MBps) [2024-11-10T05:19:29.369Z] Copying: 25/256 [MB] (10 MBps) [2024-11-10T05:19:30.311Z] Copying: 37/256 [MB] (11 MBps) [2024-11-10T05:19:31.253Z] Copying: 48/256 [MB] (11 MBps) [2024-11-10T05:19:32.196Z] Copying: 60/256 [MB] (11 MBps) [2024-11-10T05:19:33.138Z] Copying: 71/256 [MB] (11 MBps) [2024-11-10T05:19:34.082Z] Copying: 82/256 [MB] (11 MBps) [2024-11-10T05:19:35.025Z] Copying: 101/256 [MB] (18 MBps) [2024-11-10T05:19:36.411Z] Copying: 114/256 [MB] (13 MBps) [2024-11-10T05:19:37.352Z] Copying: 133/256 [MB] (19 MBps) [2024-11-10T05:19:38.296Z] Copying: 158/256 [MB] (25 MBps) [2024-11-10T05:19:39.237Z] Copying: 176/256 [MB] (17 MBps) [2024-11-10T05:19:40.179Z] Copying: 191/256 [MB] (15 MBps) [2024-11-10T05:19:41.123Z] Copying: 207/256 [MB] (15 MBps) [2024-11-10T05:19:42.067Z] Copying: 226/256 [MB] (18 MBps) [2024-11-10T05:19:42.640Z] Copying: 245/256 [MB] (19 MBps) [2024-11-10T05:19:42.903Z] Copying: 256/256 [MB] (average 15 MBps)[2024-11-10 05:19:42.820376] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:49.667 [2024-11-10 05:19:42.822439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.667 [2024-11-10 05:19:42.822487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:49.667 [2024-11-10 05:19:42.822511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:49.667 [2024-11-10 05:19:42.822521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.667 [2024-11-10 05:19:42.822545] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:49.667 [2024-11-10 05:19:42.823284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.667 [2024-11-10 05:19:42.823324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:49.667 [2024-11-10 05:19:42.823336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.724 ms 00:17:49.667 [2024-11-10 05:19:42.823345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.667 [2024-11-10 05:19:42.823636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.667 [2024-11-10 05:19:42.823647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:49.667 [2024-11-10 05:19:42.823656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:17:49.667 [2024-11-10 05:19:42.823664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.667 [2024-11-10 05:19:42.828146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.667 [2024-11-10 05:19:42.828169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:49.667 [2024-11-10 05:19:42.828182] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.460 ms 00:17:49.667 [2024-11-10 05:19:42.828192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.667 [2024-11-10 05:19:42.836759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.667 [2024-11-10 05:19:42.836797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:49.667 [2024-11-10 05:19:42.836808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.539 ms 00:17:49.667 [2024-11-10 05:19:42.836817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.667 [2024-11-10 05:19:42.839683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.667 [2024-11-10 05:19:42.839734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:49.667 [2024-11-10 05:19:42.839744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:17:49.667 [2024-11-10 05:19:42.839764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.667 [2024-11-10 05:19:42.844234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.667 [2024-11-10 05:19:42.844281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:49.667 [2024-11-10 05:19:42.844300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.418 ms 00:17:49.667 [2024-11-10 05:19:42.844309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.667 [2024-11-10 05:19:42.844449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.667 [2024-11-10 05:19:42.844460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:49.667 [2024-11-10 05:19:42.844470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:49.667 [2024-11-10 05:19:42.844479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.667 [2024-11-10 05:19:42.847926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.667 [2024-11-10 05:19:42.848016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:49.668 [2024-11-10 05:19:42.848028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.427 ms 00:17:49.668 [2024-11-10 05:19:42.848036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.668 [2024-11-10 05:19:42.850952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.668 [2024-11-10 05:19:42.851006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:49.668 [2024-11-10 05:19:42.851017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.869 ms 00:17:49.668 [2024-11-10 05:19:42.851024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.668 [2024-11-10 05:19:42.853321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.668 [2024-11-10 05:19:42.853364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:49.668 [2024-11-10 05:19:42.853374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.250 ms 00:17:49.668 [2024-11-10 05:19:42.853381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.668 [2024-11-10 05:19:42.854938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.668 [2024-11-10 05:19:42.854983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean 
state 00:17:49.668 [2024-11-10 05:19:42.855010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.478 ms 00:17:49.668 [2024-11-10 05:19:42.855018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.668 [2024-11-10 05:19:42.855062] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:49.668 [2024-11-10 05:19:42.855085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855267] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 
05:19:42.855462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:49.668 [2024-11-10 05:19:42.855604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 
00:17:49.669 [2024-11-10 05:19:42.855660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 
wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:49.669 [2024-11-10 05:19:42.855908] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:49.669 [2024-11-10 05:19:42.855918] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cc1e5a32-ec08-4380-949d-289ffcdd7e88 00:17:49.669 [2024-11-10 05:19:42.855946] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:49.669 [2024-11-10 05:19:42.855954] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:49.669 [2024-11-10 05:19:42.855965] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:49.669 [2024-11-10 05:19:42.855983] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:49.669 [2024-11-10 05:19:42.856016] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:49.669 [2024-11-10 05:19:42.856025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:49.669 [2024-11-10 05:19:42.856034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:49.669 [2024-11-10 05:19:42.856041] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:49.669 [2024-11-10 05:19:42.856049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:49.669 [2024-11-10 05:19:42.856060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.669 [2024-11-10 05:19:42.856073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:49.669 [2024-11-10 05:19:42.856086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:17:49.669 [2024-11-10 05:19:42.856094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.669 [2024-11-10 05:19:42.858400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.669 [2024-11-10 05:19:42.858440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:49.669 [2024-11-10 05:19:42.858454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.285 ms 00:17:49.669 [2024-11-10 05:19:42.858463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.669 [2024-11-10 05:19:42.858605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.669 [2024-11-10 05:19:42.858622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:49.669 [2024-11-10 05:19:42.858631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:17:49.669 [2024-11-10 05:19:42.858638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.669 [2024-11-10 05:19:42.866285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.669 [2024-11-10 05:19:42.866330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:49.669 [2024-11-10 05:19:42.866340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.669 [2024-11-10 05:19:42.866349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.669 [2024-11-10 
05:19:42.866417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.669 [2024-11-10 05:19:42.866428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:49.669 [2024-11-10 05:19:42.866441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.669 [2024-11-10 05:19:42.866449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.669 [2024-11-10 05:19:42.866498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.669 [2024-11-10 05:19:42.866509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:49.669 [2024-11-10 05:19:42.866521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.669 [2024-11-10 05:19:42.866530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.669 [2024-11-10 05:19:42.866549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.669 [2024-11-10 05:19:42.866562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:49.669 [2024-11-10 05:19:42.866575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.669 [2024-11-10 05:19:42.866584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.669 [2024-11-10 05:19:42.879946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.669 [2024-11-10 05:19:42.881165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:49.669 [2024-11-10 05:19:42.881212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.669 [2024-11-10 05:19:42.881223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.669 [2024-11-10 05:19:42.891331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.669 [2024-11-10 05:19:42.891385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:49.669 [2024-11-10 05:19:42.891397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.669 [2024-11-10 05:19:42.891405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.669 [2024-11-10 05:19:42.891459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.669 [2024-11-10 05:19:42.891470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:49.669 [2024-11-10 05:19:42.891478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.669 [2024-11-10 05:19:42.891487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.669 [2024-11-10 05:19:42.891520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.669 [2024-11-10 05:19:42.891529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:49.669 [2024-11-10 05:19:42.891537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.669 [2024-11-10 05:19:42.891549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.669 [2024-11-10 05:19:42.891628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.669 [2024-11-10 05:19:42.891639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:49.669 [2024-11-10 05:19:42.891652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.669 [2024-11-10 05:19:42.891660] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.669 [2024-11-10 05:19:42.891692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.669 [2024-11-10 05:19:42.891701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:49.669 [2024-11-10 05:19:42.891709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.669 [2024-11-10 05:19:42.891717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.670 [2024-11-10 05:19:42.891765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.670 [2024-11-10 05:19:42.891775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:49.670 [2024-11-10 05:19:42.891787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.670 [2024-11-10 05:19:42.891795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.670 [2024-11-10 05:19:42.891844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.670 [2024-11-10 05:19:42.891859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:49.670 [2024-11-10 05:19:42.891868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.670 [2024-11-10 05:19:42.891878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.670 [2024-11-10 05:19:42.892062] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.597 ms, result 0 00:17:49.931 00:17:49.931 00:17:50.192 05:19:43 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:50.765 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:50.765 05:19:43 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:50.765 05:19:43 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:50.765 05:19:43 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:50.765 05:19:43 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:50.765 05:19:43 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:50.765 05:19:43 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:50.765 05:19:43 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 85868 00:17:50.765 05:19:43 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85868 ']' 00:17:50.765 05:19:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85868 00:17:50.765 Process with pid 85868 is not found 00:17:50.765 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85868) - No such process 00:17:50.765 05:19:43 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 85868 is not found' 00:17:50.765 ************************************ 00:17:50.765 END TEST ftl_trim 00:17:50.765 ************************************ 00:17:50.765 00:17:50.765 real 1m8.526s 00:17:50.765 user 1m31.760s 00:17:50.765 sys 0m4.985s 00:17:50.765 05:19:43 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:50.765 05:19:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:50.765 05:19:43 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:50.765 05:19:43 ftl -- common/autotest_common.sh@1101 -- # '[' 5 
-le 1 ']' 00:17:50.765 05:19:43 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:50.765 05:19:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:50.765 ************************************ 00:17:50.765 START TEST ftl_restore 00:17:50.765 ************************************ 00:17:50.765 05:19:43 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:50.765 * Looking for test storage... 00:17:50.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.765 05:19:43 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:50.765 05:19:43 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:50.765 05:19:43 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:17:51.026 05:19:44 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.026 05:19:44 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:51.026 05:19:44 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.026 05:19:44 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:51.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.026 --rc genhtml_branch_coverage=1 00:17:51.026 --rc genhtml_function_coverage=1 00:17:51.026 --rc genhtml_legend=1 00:17:51.026 --rc geninfo_all_blocks=1 00:17:51.026 --rc geninfo_unexecuted_blocks=1 00:17:51.026 00:17:51.026 ' 00:17:51.026 05:19:44 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:51.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.027 --rc genhtml_branch_coverage=1 00:17:51.027 --rc genhtml_function_coverage=1 00:17:51.027 --rc genhtml_legend=1 00:17:51.027 --rc geninfo_all_blocks=1 00:17:51.027 --rc geninfo_unexecuted_blocks=1 00:17:51.027 00:17:51.027 ' 00:17:51.027 05:19:44 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:51.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.027 --rc genhtml_branch_coverage=1 00:17:51.027 --rc genhtml_function_coverage=1 00:17:51.027 --rc genhtml_legend=1 00:17:51.027 --rc geninfo_all_blocks=1 00:17:51.027 --rc geninfo_unexecuted_blocks=1 00:17:51.027 00:17:51.027 ' 00:17:51.027 05:19:44 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:51.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.027 --rc genhtml_branch_coverage=1 00:17:51.027 --rc genhtml_function_coverage=1 00:17:51.027 --rc genhtml_legend=1 00:17:51.027 --rc geninfo_all_blocks=1 00:17:51.027 --rc geninfo_unexecuted_blocks=1 00:17:51.027 00:17:51.027 ' 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
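The cmp_versions trace above splits each dotted version on ".-:" and walks the fields left to right; "lt 1.15 2" succeeds because 1 < 2 in the first field, which is what enables the lcov branch/function coverage options. A condensed, self-contained sketch of that check (an illustrative rewrite for numeric fields, not the harness's own helper):

    version_lt() {                       # usage: version_lt 1.15 2
      local IFS=.-: i a b
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field wins
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                           # equal, so not strictly less-than
    }
    version_lt 1.15 2 && echo 'lcov is older than 2.x'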
00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.0wKHj6NTJu 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:51.027 
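With the environment exported and the restore_kill cleanup trap armed, the records that follow assemble the whole device stack for the restore test through rpc.py. Condensed into the bare calls, this is a sketch rather than a drop-in script: $RPC stands for the scripts/rpc.py path above, and the two UUIDs are the per-run values this log happens to report below.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on it
    $RPC bdev_lvol_create nvme0n1p0 103424 -t \
         -u a72ff7d2-715c-4eaa-b7c3-c0e4fae23ed3                        # thin 101-GiB lvol
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV-cache NVMe
    $RPC bdev_split_create nvc0n1 -s 5171 1                             # 5171-MiB cache slice
    $RPC -t 240 bdev_ftl_create -b ftl0 \
         -d d62d8f37-6b09-4041-9440-003290796530 \
         --l2p_dram_limit 10 -c nvc0n1p0                                # FTL on top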
05:19:44 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=86173 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 86173 00:17:51.027 05:19:44 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 86173 ']' 00:17:51.027 05:19:44 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.027 05:19:44 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.027 05:19:44 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:51.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.027 05:19:44 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.027 05:19:44 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.027 05:19:44 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:51.027 [2024-11-10 05:19:44.147183] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:51.027 [2024-11-10 05:19:44.147328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86173 ] 00:17:51.287 [2024-11-10 05:19:44.298118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.287 [2024-11-10 05:19:44.348431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.860 05:19:45 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.860 05:19:45 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:17:51.860 05:19:45 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:51.860 05:19:45 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:51.860 05:19:45 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:51.860 05:19:45 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:51.860 05:19:45 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:51.860 05:19:45 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:52.122 05:19:45 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:52.122 05:19:45 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:52.122 05:19:45 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:52.122 05:19:45 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:52.122 05:19:45 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:52.122 05:19:45 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:52.122 05:19:45 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:52.122 05:19:45 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:52.383 05:19:45 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:52.383 { 00:17:52.383 "name": "nvme0n1", 00:17:52.383 "aliases": [ 00:17:52.383 "7ff3b995-fa13-49ba-b5ab-138ea292aaa3" 00:17:52.383 ], 00:17:52.383 "product_name": "NVMe disk", 00:17:52.383 "block_size": 4096, 00:17:52.383 "num_blocks": 1310720, 00:17:52.383 "uuid": 
"7ff3b995-fa13-49ba-b5ab-138ea292aaa3", 00:17:52.383 "numa_id": -1, 00:17:52.383 "assigned_rate_limits": { 00:17:52.383 "rw_ios_per_sec": 0, 00:17:52.383 "rw_mbytes_per_sec": 0, 00:17:52.383 "r_mbytes_per_sec": 0, 00:17:52.383 "w_mbytes_per_sec": 0 00:17:52.383 }, 00:17:52.383 "claimed": true, 00:17:52.383 "claim_type": "read_many_write_one", 00:17:52.383 "zoned": false, 00:17:52.383 "supported_io_types": { 00:17:52.383 "read": true, 00:17:52.383 "write": true, 00:17:52.383 "unmap": true, 00:17:52.383 "flush": true, 00:17:52.383 "reset": true, 00:17:52.383 "nvme_admin": true, 00:17:52.383 "nvme_io": true, 00:17:52.383 "nvme_io_md": false, 00:17:52.383 "write_zeroes": true, 00:17:52.383 "zcopy": false, 00:17:52.383 "get_zone_info": false, 00:17:52.383 "zone_management": false, 00:17:52.383 "zone_append": false, 00:17:52.383 "compare": true, 00:17:52.383 "compare_and_write": false, 00:17:52.383 "abort": true, 00:17:52.383 "seek_hole": false, 00:17:52.383 "seek_data": false, 00:17:52.383 "copy": true, 00:17:52.383 "nvme_iov_md": false 00:17:52.383 }, 00:17:52.383 "driver_specific": { 00:17:52.383 "nvme": [ 00:17:52.383 { 00:17:52.383 "pci_address": "0000:00:11.0", 00:17:52.383 "trid": { 00:17:52.383 "trtype": "PCIe", 00:17:52.383 "traddr": "0000:00:11.0" 00:17:52.383 }, 00:17:52.383 "ctrlr_data": { 00:17:52.383 "cntlid": 0, 00:17:52.383 "vendor_id": "0x1b36", 00:17:52.383 "model_number": "QEMU NVMe Ctrl", 00:17:52.383 "serial_number": "12341", 00:17:52.383 "firmware_revision": "8.0.0", 00:17:52.383 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:52.383 "oacs": { 00:17:52.383 "security": 0, 00:17:52.383 "format": 1, 00:17:52.383 "firmware": 0, 00:17:52.383 "ns_manage": 1 00:17:52.383 }, 00:17:52.383 "multi_ctrlr": false, 00:17:52.383 "ana_reporting": false 00:17:52.383 }, 00:17:52.383 "vs": { 00:17:52.383 "nvme_version": "1.4" 00:17:52.383 }, 00:17:52.383 "ns_data": { 00:17:52.383 "id": 1, 00:17:52.383 "can_share": false 00:17:52.383 } 00:17:52.383 } 00:17:52.383 ], 00:17:52.383 "mp_policy": "active_passive" 00:17:52.383 } 00:17:52.383 } 00:17:52.383 ]' 00:17:52.383 05:19:45 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:52.383 05:19:45 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:52.383 05:19:45 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:52.648 05:19:45 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:52.648 05:19:45 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:52.648 05:19:45 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:17:52.648 05:19:45 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:52.648 05:19:45 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:52.648 05:19:45 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:52.648 05:19:45 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:52.648 05:19:45 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:52.648 05:19:45 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=3fbbe496-612e-42e3-a309-c94907fc4c77 00:17:52.648 05:19:45 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:52.648 05:19:45 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3fbbe496-612e-42e3-a309-c94907fc4c77 00:17:52.941 05:19:46 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:17:53.203 05:19:46 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=a72ff7d2-715c-4eaa-b7c3-c0e4fae23ed3 00:17:53.203 05:19:46 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a72ff7d2-715c-4eaa-b7c3-c0e4fae23ed3 00:17:53.467 05:19:46 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=d62d8f37-6b09-4041-9440-003290796530 00:17:53.467 05:19:46 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:53.467 05:19:46 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d62d8f37-6b09-4041-9440-003290796530 00:17:53.467 05:19:46 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:53.467 05:19:46 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:53.467 05:19:46 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=d62d8f37-6b09-4041-9440-003290796530 00:17:53.467 05:19:46 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:53.467 05:19:46 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size d62d8f37-6b09-4041-9440-003290796530 00:17:53.467 05:19:46 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=d62d8f37-6b09-4041-9440-003290796530 00:17:53.467 05:19:46 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:53.467 05:19:46 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:53.467 05:19:46 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:53.467 05:19:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d62d8f37-6b09-4041-9440-003290796530 00:17:53.728 05:19:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:53.728 { 00:17:53.728 "name": "d62d8f37-6b09-4041-9440-003290796530", 00:17:53.728 "aliases": [ 00:17:53.728 "lvs/nvme0n1p0" 00:17:53.728 ], 00:17:53.728 "product_name": "Logical Volume", 00:17:53.728 "block_size": 4096, 00:17:53.728 "num_blocks": 26476544, 00:17:53.728 "uuid": "d62d8f37-6b09-4041-9440-003290796530", 00:17:53.728 "assigned_rate_limits": { 00:17:53.728 "rw_ios_per_sec": 0, 00:17:53.728 "rw_mbytes_per_sec": 0, 00:17:53.729 "r_mbytes_per_sec": 0, 00:17:53.729 "w_mbytes_per_sec": 0 00:17:53.729 }, 00:17:53.729 "claimed": false, 00:17:53.729 "zoned": false, 00:17:53.729 "supported_io_types": { 00:17:53.729 "read": true, 00:17:53.729 "write": true, 00:17:53.729 "unmap": true, 00:17:53.729 "flush": false, 00:17:53.729 "reset": true, 00:17:53.729 "nvme_admin": false, 00:17:53.729 "nvme_io": false, 00:17:53.729 "nvme_io_md": false, 00:17:53.729 "write_zeroes": true, 00:17:53.729 "zcopy": false, 00:17:53.729 "get_zone_info": false, 00:17:53.729 "zone_management": false, 00:17:53.729 "zone_append": false, 00:17:53.729 "compare": false, 00:17:53.729 "compare_and_write": false, 00:17:53.729 "abort": false, 00:17:53.729 "seek_hole": true, 00:17:53.729 "seek_data": true, 00:17:53.729 "copy": false, 00:17:53.729 "nvme_iov_md": false 00:17:53.729 }, 00:17:53.729 "driver_specific": { 00:17:53.729 "lvol": { 00:17:53.729 "lvol_store_uuid": "a72ff7d2-715c-4eaa-b7c3-c0e4fae23ed3", 00:17:53.729 "base_bdev": "nvme0n1", 00:17:53.729 "thin_provision": true, 00:17:53.729 "num_allocated_clusters": 0, 00:17:53.729 "snapshot": false, 00:17:53.729 "clone": false, 00:17:53.729 "esnap_clone": false 00:17:53.729 } 00:17:53.729 } 00:17:53.729 } 00:17:53.729 ]' 00:17:53.729 05:19:46 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:53.729 05:19:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:53.729 05:19:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:53.729 05:19:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:53.729 05:19:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:53.729 05:19:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:53.729 05:19:46 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:53.729 05:19:46 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:53.729 05:19:46 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:53.989 05:19:47 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:53.989 05:19:47 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:53.989 05:19:47 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size d62d8f37-6b09-4041-9440-003290796530 00:17:53.989 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=d62d8f37-6b09-4041-9440-003290796530 00:17:53.989 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:53.989 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:53.989 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:53.989 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d62d8f37-6b09-4041-9440-003290796530 00:17:54.250 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:54.250 { 00:17:54.250 "name": "d62d8f37-6b09-4041-9440-003290796530", 00:17:54.250 "aliases": [ 00:17:54.250 "lvs/nvme0n1p0" 00:17:54.250 ], 00:17:54.250 "product_name": "Logical Volume", 00:17:54.250 "block_size": 4096, 00:17:54.250 "num_blocks": 26476544, 00:17:54.250 "uuid": "d62d8f37-6b09-4041-9440-003290796530", 00:17:54.250 "assigned_rate_limits": { 00:17:54.250 "rw_ios_per_sec": 0, 00:17:54.250 "rw_mbytes_per_sec": 0, 00:17:54.250 "r_mbytes_per_sec": 0, 00:17:54.250 "w_mbytes_per_sec": 0 00:17:54.250 }, 00:17:54.250 "claimed": false, 00:17:54.250 "zoned": false, 00:17:54.250 "supported_io_types": { 00:17:54.250 "read": true, 00:17:54.250 "write": true, 00:17:54.250 "unmap": true, 00:17:54.250 "flush": false, 00:17:54.250 "reset": true, 00:17:54.250 "nvme_admin": false, 00:17:54.250 "nvme_io": false, 00:17:54.250 "nvme_io_md": false, 00:17:54.250 "write_zeroes": true, 00:17:54.250 "zcopy": false, 00:17:54.250 "get_zone_info": false, 00:17:54.250 "zone_management": false, 00:17:54.250 "zone_append": false, 00:17:54.250 "compare": false, 00:17:54.250 "compare_and_write": false, 00:17:54.250 "abort": false, 00:17:54.250 "seek_hole": true, 00:17:54.250 "seek_data": true, 00:17:54.250 "copy": false, 00:17:54.250 "nvme_iov_md": false 00:17:54.250 }, 00:17:54.250 "driver_specific": { 00:17:54.250 "lvol": { 00:17:54.250 "lvol_store_uuid": "a72ff7d2-715c-4eaa-b7c3-c0e4fae23ed3", 00:17:54.250 "base_bdev": "nvme0n1", 00:17:54.250 "thin_provision": true, 00:17:54.250 "num_allocated_clusters": 0, 00:17:54.250 "snapshot": false, 00:17:54.250 "clone": false, 00:17:54.250 "esnap_clone": false 00:17:54.250 } 00:17:54.250 } 00:17:54.250 } 00:17:54.250 ]' 00:17:54.250 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
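The bs/nb pair that get_bdev_size just pulled out with jq yields the bdev size in MiB by straight arithmetic; a worked check of the values logged above:

    bs=4096 nb=26476544                       # block_size and num_blocks from jq
    echo $(( bs * nb / 1024 / 1024 ))         # 103424 (MiB), matching bdev_size above
    echo $(( 4096 * 1310720 / 1024 / 1024 ))  # 5120 (MiB), the raw nvme0n1 earlier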
00:17:54.250 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:54.250 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:54.250 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:54.250 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:54.250 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:54.250 05:19:47 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:54.250 05:19:47 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:54.511 05:19:47 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:54.511 05:19:47 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size d62d8f37-6b09-4041-9440-003290796530 00:17:54.511 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=d62d8f37-6b09-4041-9440-003290796530 00:17:54.511 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:54.511 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:54.511 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:54.511 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d62d8f37-6b09-4041-9440-003290796530 00:17:54.773 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:54.773 { 00:17:54.773 "name": "d62d8f37-6b09-4041-9440-003290796530", 00:17:54.773 "aliases": [ 00:17:54.773 "lvs/nvme0n1p0" 00:17:54.773 ], 00:17:54.773 "product_name": "Logical Volume", 00:17:54.773 "block_size": 4096, 00:17:54.773 "num_blocks": 26476544, 00:17:54.773 "uuid": "d62d8f37-6b09-4041-9440-003290796530", 00:17:54.773 "assigned_rate_limits": { 00:17:54.773 "rw_ios_per_sec": 0, 00:17:54.773 "rw_mbytes_per_sec": 0, 00:17:54.773 "r_mbytes_per_sec": 0, 00:17:54.773 "w_mbytes_per_sec": 0 00:17:54.773 }, 00:17:54.773 "claimed": false, 00:17:54.773 "zoned": false, 00:17:54.773 "supported_io_types": { 00:17:54.773 "read": true, 00:17:54.773 "write": true, 00:17:54.773 "unmap": true, 00:17:54.773 "flush": false, 00:17:54.773 "reset": true, 00:17:54.773 "nvme_admin": false, 00:17:54.773 "nvme_io": false, 00:17:54.773 "nvme_io_md": false, 00:17:54.773 "write_zeroes": true, 00:17:54.773 "zcopy": false, 00:17:54.773 "get_zone_info": false, 00:17:54.773 "zone_management": false, 00:17:54.773 "zone_append": false, 00:17:54.773 "compare": false, 00:17:54.773 "compare_and_write": false, 00:17:54.773 "abort": false, 00:17:54.773 "seek_hole": true, 00:17:54.773 "seek_data": true, 00:17:54.773 "copy": false, 00:17:54.773 "nvme_iov_md": false 00:17:54.773 }, 00:17:54.773 "driver_specific": { 00:17:54.773 "lvol": { 00:17:54.773 "lvol_store_uuid": "a72ff7d2-715c-4eaa-b7c3-c0e4fae23ed3", 00:17:54.773 "base_bdev": "nvme0n1", 00:17:54.773 "thin_provision": true, 00:17:54.773 "num_allocated_clusters": 0, 00:17:54.773 "snapshot": false, 00:17:54.773 "clone": false, 00:17:54.773 "esnap_clone": false 00:17:54.773 } 00:17:54.773 } 00:17:54.773 } 00:17:54.773 ]' 00:17:54.773 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:54.773 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:54.773 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:54.773 05:19:47 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:17:54.773 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:54.773 05:19:47 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:54.773 05:19:47 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:54.773 05:19:47 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d62d8f37-6b09-4041-9440-003290796530 --l2p_dram_limit 10' 00:17:54.773 05:19:47 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:54.773 05:19:47 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:54.773 05:19:47 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:54.773 05:19:47 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:54.773 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:54.773 05:19:47 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d62d8f37-6b09-4041-9440-003290796530 --l2p_dram_limit 10 -c nvc0n1p0 00:17:55.035 [2024-11-10 05:19:48.069251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.035 [2024-11-10 05:19:48.069382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:55.035 [2024-11-10 05:19:48.069399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:55.035 [2024-11-10 05:19:48.069408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.035 [2024-11-10 05:19:48.069458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.035 [2024-11-10 05:19:48.069468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:55.035 [2024-11-10 05:19:48.069475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:55.035 [2024-11-10 05:19:48.069484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.035 [2024-11-10 05:19:48.069505] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:55.035 [2024-11-10 05:19:48.069714] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:55.035 [2024-11-10 05:19:48.069726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.035 [2024-11-10 05:19:48.069734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:55.035 [2024-11-10 05:19:48.069742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:17:55.035 [2024-11-10 05:19:48.069749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.035 [2024-11-10 05:19:48.069798] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 009e53e2-83fe-4f5e-b934-8f419491318d 00:17:55.035 [2024-11-10 05:19:48.070773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.035 [2024-11-10 05:19:48.070795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:55.035 [2024-11-10 05:19:48.070804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:55.035 [2024-11-10 05:19:48.070810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.035 [2024-11-10 05:19:48.075532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.035 [2024-11-10 
05:19:48.075558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:55.035 [2024-11-10 05:19:48.075568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.687 ms 00:17:55.035 [2024-11-10 05:19:48.075573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.035 [2024-11-10 05:19:48.075632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.035 [2024-11-10 05:19:48.075638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:55.035 [2024-11-10 05:19:48.075646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:55.035 [2024-11-10 05:19:48.075654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.035 [2024-11-10 05:19:48.075691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.035 [2024-11-10 05:19:48.075698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:55.035 [2024-11-10 05:19:48.075706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:55.035 [2024-11-10 05:19:48.075711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.035 [2024-11-10 05:19:48.075730] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:55.035 [2024-11-10 05:19:48.077045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.035 [2024-11-10 05:19:48.077070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:55.035 [2024-11-10 05:19:48.077079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.322 ms 00:17:55.035 [2024-11-10 05:19:48.077086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.035 [2024-11-10 05:19:48.077110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.035 [2024-11-10 05:19:48.077118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:55.035 [2024-11-10 05:19:48.077125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:55.035 [2024-11-10 05:19:48.077133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.035 [2024-11-10 05:19:48.077146] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:55.035 [2024-11-10 05:19:48.077257] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:55.035 [2024-11-10 05:19:48.077267] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:55.036 [2024-11-10 05:19:48.077277] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:55.036 [2024-11-10 05:19:48.077285] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:55.036 [2024-11-10 05:19:48.077293] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:55.036 [2024-11-10 05:19:48.077299] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:55.036 [2024-11-10 05:19:48.077309] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:55.036 [2024-11-10 05:19:48.077314] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:55.036 [2024-11-10 05:19:48.077325] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:55.036 [2024-11-10 05:19:48.077332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.036 [2024-11-10 05:19:48.077339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:55.036 [2024-11-10 05:19:48.077345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:17:55.036 [2024-11-10 05:19:48.077352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.036 [2024-11-10 05:19:48.077416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.036 [2024-11-10 05:19:48.077427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:55.036 [2024-11-10 05:19:48.077433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:55.036 [2024-11-10 05:19:48.077440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.036 [2024-11-10 05:19:48.077513] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:55.036 [2024-11-10 05:19:48.077525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:55.036 [2024-11-10 05:19:48.077532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.036 [2024-11-10 05:19:48.077539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.036 [2024-11-10 05:19:48.077545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:55.036 [2024-11-10 05:19:48.077551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:55.036 [2024-11-10 05:19:48.077556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:55.036 [2024-11-10 05:19:48.077564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:55.036 [2024-11-10 05:19:48.077569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:55.036 [2024-11-10 05:19:48.077575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.036 [2024-11-10 05:19:48.077580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:55.036 [2024-11-10 05:19:48.077587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:55.036 [2024-11-10 05:19:48.077592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.036 [2024-11-10 05:19:48.077600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:55.036 [2024-11-10 05:19:48.077605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:55.036 [2024-11-10 05:19:48.077611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.036 [2024-11-10 05:19:48.077616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:55.036 [2024-11-10 05:19:48.077623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:55.036 [2024-11-10 05:19:48.077630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.036 [2024-11-10 05:19:48.077637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:55.036 [2024-11-10 05:19:48.077642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:55.036 [2024-11-10 05:19:48.077648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.036 [2024-11-10 05:19:48.077653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:55.036 
[2024-11-10 05:19:48.077660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:55.036 [2024-11-10 05:19:48.077665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.036 [2024-11-10 05:19:48.077671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:55.036 [2024-11-10 05:19:48.077676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:55.036 [2024-11-10 05:19:48.077682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.036 [2024-11-10 05:19:48.077687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:55.036 [2024-11-10 05:19:48.077696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:55.036 [2024-11-10 05:19:48.077702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.036 [2024-11-10 05:19:48.077709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:55.036 [2024-11-10 05:19:48.077715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:55.036 [2024-11-10 05:19:48.077723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.036 [2024-11-10 05:19:48.077729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:55.036 [2024-11-10 05:19:48.077736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:55.036 [2024-11-10 05:19:48.077742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.036 [2024-11-10 05:19:48.077749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:55.036 [2024-11-10 05:19:48.077754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:55.036 [2024-11-10 05:19:48.077761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.036 [2024-11-10 05:19:48.077767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:55.036 [2024-11-10 05:19:48.077774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:55.036 [2024-11-10 05:19:48.077779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.036 [2024-11-10 05:19:48.077786] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:55.036 [2024-11-10 05:19:48.077793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:55.036 [2024-11-10 05:19:48.077802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.036 [2024-11-10 05:19:48.077808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.036 [2024-11-10 05:19:48.077818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:55.036 [2024-11-10 05:19:48.077824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:55.036 [2024-11-10 05:19:48.077831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:55.036 [2024-11-10 05:19:48.077838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:55.036 [2024-11-10 05:19:48.077845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:55.036 [2024-11-10 05:19:48.077851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:55.036 [2024-11-10 05:19:48.077861] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:55.036 [2024-11-10 
05:19:48.077871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.036 [2024-11-10 05:19:48.077880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:55.036 [2024-11-10 05:19:48.077886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:55.036 [2024-11-10 05:19:48.077894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:55.036 [2024-11-10 05:19:48.077900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:55.036 [2024-11-10 05:19:48.077908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:55.036 [2024-11-10 05:19:48.077914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:55.036 [2024-11-10 05:19:48.077923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:55.036 [2024-11-10 05:19:48.077929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:55.036 [2024-11-10 05:19:48.077936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:55.036 [2024-11-10 05:19:48.077942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:55.036 [2024-11-10 05:19:48.077949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:55.036 [2024-11-10 05:19:48.077956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:55.036 [2024-11-10 05:19:48.077963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:55.036 [2024-11-10 05:19:48.077970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:55.036 [2024-11-10 05:19:48.077977] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:55.036 [2024-11-10 05:19:48.077986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.036 [2024-11-10 05:19:48.078006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:55.036 [2024-11-10 05:19:48.078012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:55.036 [2024-11-10 05:19:48.078020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:55.036 [2024-11-10 05:19:48.078027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:55.036 [2024-11-10 05:19:48.078034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.036 [2024-11-10 05:19:48.078041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:55.036 [2024-11-10 05:19:48.078052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:17:55.036 [2024-11-10 05:19:48.078058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.036 [2024-11-10 05:19:48.078088] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:55.036 [2024-11-10 05:19:48.078097] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:59.247 [2024-11-10 05:19:51.931155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:51.931248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:59.247 [2024-11-10 05:19:51.931273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3853.044 ms 00:17:59.247 [2024-11-10 05:19:51.931283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:51.944817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:51.945090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:59.247 [2024-11-10 05:19:51.945121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.410 ms 00:17:59.247 [2024-11-10 05:19:51.945130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:51.945248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:51.945258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:59.247 [2024-11-10 05:19:51.945274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:59.247 [2024-11-10 05:19:51.945282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:51.957115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:51.957166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:59.247 [2024-11-10 05:19:51.957181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.779 ms 00:17:59.247 [2024-11-10 05:19:51.957190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:51.957225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:51.957237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:59.247 [2024-11-10 05:19:51.957248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:59.247 [2024-11-10 05:19:51.957256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:51.957805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:51.957844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:59.247 [2024-11-10 05:19:51.957857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:17:59.247 [2024-11-10 05:19:51.957869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 
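The sizes printed in this startup trace are internally consistent, and a quick sketch of the arithmetic makes the layout dump above easier to read. All values are copied from the trace itself; the get_bdev_size helper evidently reports MiB, which is why bdev_size=103424 reappears as "Base device capacity: 103424.00 MiB" and cache_size=5171 as "NV cache device capacity: 5171.00 MiB":

    # bdev size: num_blocks * block_size, expressed in MiB
    echo $(( 26476544 * 4096 / 1024 / 1024 ))   # => 103424

    # L2P table: 20971520 entries * 4-byte L2P address size = 80 MiB,
    # matching "Region l2p ... blocks: 80.00 MiB" in the NV cache layout
    echo $(( 20971520 * 4 / 1024 / 1024 ))      # => 80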
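The hex triples in the "SB metadata layout" dump are block-granular and decode straight into the MiB figures of the region dump. As an example, the type:0x2 entry (blk_offs:0x20 blk_sz:0x5000) lines up with the L2P region, assuming the 4096-byte block size reported above:

    bs=4096                                            # block size from bdev_get_bdevs
    echo "scale=2; $((0x20))   * $bs / 1048576" | bc   # => .12   (offset in MiB)
    echo "scale=2; $((0x5000)) * $bs / 1048576" | bc   # => 80.00 (size in MiB)

which matches "Region l2p: offset 0.12 MiB, blocks 80.00 MiB" above.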
[2024-11-10 05:19:51.958025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:51.958041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:59.247 [2024-11-10 05:19:51.958057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:17:59.247 [2024-11-10 05:19:51.958066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:51.973809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:51.974054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:59.247 [2024-11-10 05:19:51.974087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.712 ms 00:17:59.247 [2024-11-10 05:19:51.974099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:51.986416] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:59.247 [2024-11-10 05:19:51.990241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:51.990293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:59.247 [2024-11-10 05:19:51.990306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.017 ms 00:17:59.247 [2024-11-10 05:19:51.990317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:52.076595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:52.076691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:59.247 [2024-11-10 05:19:52.076715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.237 ms 00:17:59.247 [2024-11-10 05:19:52.076742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:52.077051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:52.077075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:59.247 [2024-11-10 05:19:52.077091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:17:59.247 [2024-11-10 05:19:52.077127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:52.083391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:52.083470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:59.247 [2024-11-10 05:19:52.083490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.228 ms 00:17:59.247 [2024-11-10 05:19:52.083507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:52.088657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:52.088726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:59.247 [2024-11-10 05:19:52.088743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.081 ms 00:17:59.247 [2024-11-10 05:19:52.088758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:52.089294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:52.089327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:59.247 
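The "l2p maximum resident size is: 9 (of 10) MiB" notice is the --l2p_dram_limit 10 argument taking effect: only about 10 MiB of the 80 MiB L2P may stay resident in DRAM, with the remainder evidently paged against the l2p region on the cache device. For reference, this is the creation call issued earlier in the run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
        -b ftl0 -d d62d8f37-6b09-4041-9440-003290796530 \
        --l2p_dram_limit 10 -c nvc0n1p0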
[2024-11-10 05:19:52.089351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:17:59.247 [2024-11-10 05:19:52.089381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:52.132761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:52.132847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:59.247 [2024-11-10 05:19:52.132865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.326 ms 00:17:59.247 [2024-11-10 05:19:52.132881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:52.140457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:52.140531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:59.247 [2024-11-10 05:19:52.140547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.491 ms 00:17:59.247 [2024-11-10 05:19:52.140561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:52.146870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:52.146941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:59.247 [2024-11-10 05:19:52.146956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.246 ms 00:17:59.247 [2024-11-10 05:19:52.146972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:52.153260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:52.153332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:59.247 [2024-11-10 05:19:52.153348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.202 ms 00:17:59.247 [2024-11-10 05:19:52.153364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:52.153435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:52.153455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:59.247 [2024-11-10 05:19:52.153472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:59.247 [2024-11-10 05:19:52.153488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:52.153624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.247 [2024-11-10 05:19:52.153646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:59.247 [2024-11-10 05:19:52.153661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:59.247 [2024-11-10 05:19:52.153693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.247 [2024-11-10 05:19:52.155173] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4085.235 ms, result 0 00:17:59.247 { 00:17:59.247 "name": "ftl0", 00:17:59.247 "uuid": "009e53e2-83fe-4f5e-b934-8f419491318d" 00:17:59.248 } 00:17:59.248 05:19:52 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:59.248 05:19:52 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:59.248 05:19:52 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:59.248 05:19:52 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:59.513 [2024-11-10 05:19:52.590066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.513 [2024-11-10 05:19:52.590272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:59.513 [2024-11-10 05:19:52.590300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:59.513 [2024-11-10 05:19:52.590310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.513 [2024-11-10 05:19:52.590346] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:59.513 [2024-11-10 05:19:52.591088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.513 [2024-11-10 05:19:52.591127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:59.513 [2024-11-10 05:19:52.591139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:17:59.513 [2024-11-10 05:19:52.591151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.513 [2024-11-10 05:19:52.591427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.513 [2024-11-10 05:19:52.591448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:59.513 [2024-11-10 05:19:52.591460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:17:59.513 [2024-11-10 05:19:52.591471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.513 [2024-11-10 05:19:52.594724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.513 [2024-11-10 05:19:52.594754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:59.513 [2024-11-10 05:19:52.594765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.237 ms 00:17:59.513 [2024-11-10 05:19:52.594775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.513 [2024-11-10 05:19:52.601211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.513 [2024-11-10 05:19:52.601377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:59.513 [2024-11-10 05:19:52.601403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.418 ms 00:17:59.513 [2024-11-10 05:19:52.601413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.513 [2024-11-10 05:19:52.604454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.513 [2024-11-10 05:19:52.604615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:59.513 [2024-11-10 05:19:52.604632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.952 ms 00:17:59.513 [2024-11-10 05:19:52.604643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.513 [2024-11-10 05:19:52.610766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.513 [2024-11-10 05:19:52.610824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:59.513 [2024-11-10 05:19:52.610841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.082 ms 00:17:59.513 [2024-11-10 05:19:52.610852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.513 [2024-11-10 05:19:52.610986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.513 [2024-11-10 05:19:52.611029] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:59.513 [2024-11-10 05:19:52.611040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:59.513 [2024-11-10 05:19:52.611051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.513 [2024-11-10 05:19:52.614496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.513 [2024-11-10 05:19:52.614669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:59.513 [2024-11-10 05:19:52.614686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.422 ms 00:17:59.513 [2024-11-10 05:19:52.614696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.513 [2024-11-10 05:19:52.616562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.513 [2024-11-10 05:19:52.616616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:59.513 [2024-11-10 05:19:52.616625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.827 ms 00:17:59.513 [2024-11-10 05:19:52.616635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.513 [2024-11-10 05:19:52.618166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.513 [2024-11-10 05:19:52.618215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:59.513 [2024-11-10 05:19:52.618224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.487 ms 00:17:59.513 [2024-11-10 05:19:52.618233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.513 [2024-11-10 05:19:52.620155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.513 [2024-11-10 05:19:52.620208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:59.513 [2024-11-10 05:19:52.620218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.846 ms 00:17:59.513 [2024-11-10 05:19:52.620228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.513 [2024-11-10 05:19:52.620270] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:59.513 [2024-11-10 05:19:52.620289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:59.513 [2024-11-10 05:19:52.620300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:59.513 [2024-11-10 05:19:52.620310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:59.513 [2024-11-10 05:19:52.620318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:59.513 [2024-11-10 05:19:52.620331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:59.513 [2024-11-10 05:19:52.620339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:59.513 [2024-11-10 05:19:52.620348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:59.513 [2024-11-10 05:19:52.620358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:59.514 [2024-11-10 05:19:52.620368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:59.514 [2024-11-10 05:19:52.620376] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 10-100: 0 / 261120 wr_cnt: 0 state: free [91 identical per-band entries condensed: every remaining band also reported 0 / 261120 wr_cnt: 0 state: free] 00:17:59.515 [2024-11-10 05:19:52.621239] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:59.515 [2024-11-10 05:19:52.621249] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 009e53e2-83fe-4f5e-b934-8f419491318d 00:17:59.515 [2024-11-10 05:19:52.621260] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:59.515 [2024-11-10 05:19:52.621267] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:59.515 [2024-11-10 05:19:52.621278] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:59.515 [2024-11-10 05:19:52.621285] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:59.515 [2024-11-10 05:19:52.621295] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:59.515 [2024-11-10 05:19:52.621303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:59.515 [2024-11-10 05:19:52.621313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:59.515 [2024-11-10 05:19:52.621319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:59.515 [2024-11-10 05:19:52.621328] ftl_debug.c: 220:ftl_dev_dump_stats:
*NOTICE*: [FTL][ftl0] start: 0 00:17:59.515 [2024-11-10 05:19:52.621335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.515 [2024-11-10 05:19:52.621347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:59.515 [2024-11-10 05:19:52.621356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:17:59.515 [2024-11-10 05:19:52.621366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.515 [2024-11-10 05:19:52.623565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.515 [2024-11-10 05:19:52.623602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:59.515 [2024-11-10 05:19:52.623612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.180 ms 00:17:59.515 [2024-11-10 05:19:52.623622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.515 [2024-11-10 05:19:52.623739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.515 [2024-11-10 05:19:52.623750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:59.515 [2024-11-10 05:19:52.623760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:59.515 [2024-11-10 05:19:52.623773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.515 [2024-11-10 05:19:52.631572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.515 [2024-11-10 05:19:52.631623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:59.515 [2024-11-10 05:19:52.631634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.515 [2024-11-10 05:19:52.631644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.515 [2024-11-10 05:19:52.631711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.515 [2024-11-10 05:19:52.631722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:59.515 [2024-11-10 05:19:52.631730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.515 [2024-11-10 05:19:52.631741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.515 [2024-11-10 05:19:52.631800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.515 [2024-11-10 05:19:52.631817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:59.515 [2024-11-10 05:19:52.631825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.515 [2024-11-10 05:19:52.631835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.515 [2024-11-10 05:19:52.631853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.515 [2024-11-10 05:19:52.631867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:59.515 [2024-11-10 05:19:52.631874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.515 [2024-11-10 05:19:52.631884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.515 [2024-11-10 05:19:52.645408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.515 [2024-11-10 05:19:52.645467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:59.515 [2024-11-10 05:19:52.645478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.515 
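The persist sequence above ("Persist L2P" through "Set FTL clean state") is the write-out half of the unload, and the "Rollback" entries that follow appear to be the mirror-image release of the resources acquired at startup. Marking the superblock clean is what lets the restore this test performs next reload the same FTL instance instead of recreating it. The whole teardown was triggered by a single RPC:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0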
[2024-11-10 05:19:52.645489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.515 [2024-11-10 05:19:52.657031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.515 [2024-11-10 05:19:52.657087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:59.515 [2024-11-10 05:19:52.657099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.515 [2024-11-10 05:19:52.657114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.515 [2024-11-10 05:19:52.657194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.515 [2024-11-10 05:19:52.657210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:59.515 [2024-11-10 05:19:52.657219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.515 [2024-11-10 05:19:52.657230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.515 [2024-11-10 05:19:52.657277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.515 [2024-11-10 05:19:52.657290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:59.515 [2024-11-10 05:19:52.657305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.515 [2024-11-10 05:19:52.657316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.515 [2024-11-10 05:19:52.657390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.515 [2024-11-10 05:19:52.657403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:59.515 [2024-11-10 05:19:52.657412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.515 [2024-11-10 05:19:52.657423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.515 [2024-11-10 05:19:52.657459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.515 [2024-11-10 05:19:52.657472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:59.515 [2024-11-10 05:19:52.657480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.515 [2024-11-10 05:19:52.657493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.515 [2024-11-10 05:19:52.657535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.515 [2024-11-10 05:19:52.657549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:59.515 [2024-11-10 05:19:52.657559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.515 [2024-11-10 05:19:52.657569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.515 [2024-11-10 05:19:52.657617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.515 [2024-11-10 05:19:52.657630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:59.515 [2024-11-10 05:19:52.657642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.515 [2024-11-10 05:19:52.657660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.515 [2024-11-10 05:19:52.657808] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.708 ms, result 0 00:17:59.515 true 00:17:59.515 05:19:52 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 86173 00:17:59.515 
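killprocess is the harness helper from common/autotest_common.sh, and the trace lines that follow show its shape: an emptiness check on the pid, a kill -0 liveness probe, a look at the process name, then the kill and a wait so the exit status is collected. A paraphrased sketch, not the exact helper:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1      # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" || return 1     # is the process still alive?
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"     # terminate, then reap the exit status
    }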
05:19:52 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86173 ']' 00:17:59.515 05:19:52 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86173 00:17:59.515 05:19:52 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:17:59.515 05:19:52 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:59.515 05:19:52 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86173 00:17:59.515 killing process with pid 86173 00:17:59.515 05:19:52 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:59.515 05:19:52 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:59.515 05:19:52 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86173' 00:17:59.515 05:19:52 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 86173 00:17:59.515 05:19:52 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 86173 00:18:04.806 05:19:57 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:09.012 262144+0 records in 00:18:09.012 262144+0 records out 00:18:09.012 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.73543 s, 287 MB/s 00:18:09.012 05:20:01 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:10.445 05:20:03 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:10.445 [2024-11-10 05:20:03.581176] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:10.445 [2024-11-10 05:20:03.581271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86384 ] 00:18:10.706 [2024-11-10 05:20:03.720762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.706 [2024-11-10 05:20:03.754317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.706 [2024-11-10 05:20:03.846440] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:10.706 [2024-11-10 05:20:03.846513] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:10.970 [2024-11-10 05:20:04.006942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.970 [2024-11-10 05:20:04.007024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:10.970 [2024-11-10 05:20:04.007043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:10.970 [2024-11-10 05:20:04.007056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.970 [2024-11-10 05:20:04.007112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.970 [2024-11-10 05:20:04.007123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.970 [2024-11-10 05:20:04.007132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:10.970 [2024-11-10 05:20:04.007145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.970 [2024-11-10 05:20:04.007167] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using 
nvc0n1p0 as write buffer cache 00:18:10.970 [2024-11-10 05:20:04.007461] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:10.970 [2024-11-10 05:20:04.007483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.970 [2024-11-10 05:20:04.007492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.970 [2024-11-10 05:20:04.007504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:18:10.970 [2024-11-10 05:20:04.007515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.970 [2024-11-10 05:20:04.009189] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:10.970 [2024-11-10 05:20:04.012742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.970 [2024-11-10 05:20:04.012791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:10.970 [2024-11-10 05:20:04.012803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.555 ms 00:18:10.970 [2024-11-10 05:20:04.012811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.970 [2024-11-10 05:20:04.012886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.970 [2024-11-10 05:20:04.012895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:10.970 [2024-11-10 05:20:04.012907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:10.970 [2024-11-10 05:20:04.012915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.970 [2024-11-10 05:20:04.020796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.970 [2024-11-10 05:20:04.020839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.970 [2024-11-10 05:20:04.020849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.840 ms 00:18:10.970 [2024-11-10 05:20:04.020863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.970 [2024-11-10 05:20:04.020959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.970 [2024-11-10 05:20:04.020970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.970 [2024-11-10 05:20:04.020979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:10.970 [2024-11-10 05:20:04.020986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.970 [2024-11-10 05:20:04.021066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.970 [2024-11-10 05:20:04.021076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:10.970 [2024-11-10 05:20:04.021085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:10.970 [2024-11-10 05:20:04.021097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.970 [2024-11-10 05:20:04.021126] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:10.970 [2024-11-10 05:20:04.023054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.970 [2024-11-10 05:20:04.023089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.970 [2024-11-10 05:20:04.023099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.937 ms 00:18:10.970 [2024-11-10 05:20:04.023115] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.970 [2024-11-10 05:20:04.023156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.970 [2024-11-10 05:20:04.023165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:10.970 [2024-11-10 05:20:04.023174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:10.970 [2024-11-10 05:20:04.023182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.970 [2024-11-10 05:20:04.023208] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:10.970 [2024-11-10 05:20:04.023237] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:10.970 [2024-11-10 05:20:04.023277] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:10.970 [2024-11-10 05:20:04.023296] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:10.970 [2024-11-10 05:20:04.023403] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:10.970 [2024-11-10 05:20:04.023413] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:10.970 [2024-11-10 05:20:04.023424] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:10.970 [2024-11-10 05:20:04.023435] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:10.970 [2024-11-10 05:20:04.023447] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:10.970 [2024-11-10 05:20:04.023460] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:10.970 [2024-11-10 05:20:04.023468] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:10.970 [2024-11-10 05:20:04.023476] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:10.970 [2024-11-10 05:20:04.023483] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:10.970 [2024-11-10 05:20:04.023491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.970 [2024-11-10 05:20:04.023503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:10.970 [2024-11-10 05:20:04.023511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:18:10.970 [2024-11-10 05:20:04.023518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.970 [2024-11-10 05:20:04.023603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.970 [2024-11-10 05:20:04.023614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:10.970 [2024-11-10 05:20:04.023621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:10.970 [2024-11-10 05:20:04.023632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.970 [2024-11-10 05:20:04.023731] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:10.970 [2024-11-10 05:20:04.023743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:10.970 [2024-11-10 05:20:04.023753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:18:10.970 [2024-11-10 05:20:04.023770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.970 [2024-11-10 05:20:04.023783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:10.970 [2024-11-10 05:20:04.023792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:10.970 [2024-11-10 05:20:04.023800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:10.970 [2024-11-10 05:20:04.023809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:10.970 [2024-11-10 05:20:04.023818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:10.970 [2024-11-10 05:20:04.023826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:10.970 [2024-11-10 05:20:04.023835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:10.970 [2024-11-10 05:20:04.023842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:10.970 [2024-11-10 05:20:04.023852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:10.970 [2024-11-10 05:20:04.023860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:10.970 [2024-11-10 05:20:04.023868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:10.970 [2024-11-10 05:20:04.023877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.970 [2024-11-10 05:20:04.023885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:10.970 [2024-11-10 05:20:04.023897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:10.970 [2024-11-10 05:20:04.023905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.970 [2024-11-10 05:20:04.023913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:10.970 [2024-11-10 05:20:04.023921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:10.970 [2024-11-10 05:20:04.023929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.970 [2024-11-10 05:20:04.023937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:10.970 [2024-11-10 05:20:04.023959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:10.970 [2024-11-10 05:20:04.023968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.970 [2024-11-10 05:20:04.023976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:10.970 [2024-11-10 05:20:04.023984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:10.971 [2024-11-10 05:20:04.024016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.971 [2024-11-10 05:20:04.024029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:10.971 [2024-11-10 05:20:04.024037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:10.971 [2024-11-10 05:20:04.024045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.971 [2024-11-10 05:20:04.024054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:10.971 [2024-11-10 05:20:04.024062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:10.971 [2024-11-10 05:20:04.024070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:10.971 [2024-11-10 05:20:04.024078] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:18:10.971 [2024-11-10 05:20:04.024085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:10.971 [2024-11-10 05:20:04.024092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:10.971 [2024-11-10 05:20:04.024098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:10.971 [2024-11-10 05:20:04.024105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:10.971 [2024-11-10 05:20:04.024112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.971 [2024-11-10 05:20:04.024119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:10.971 [2024-11-10 05:20:04.024126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:10.971 [2024-11-10 05:20:04.024133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.971 [2024-11-10 05:20:04.024140] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:10.971 [2024-11-10 05:20:04.024158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:10.971 [2024-11-10 05:20:04.024169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:10.971 [2024-11-10 05:20:04.024178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.971 [2024-11-10 05:20:04.024186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:10.971 [2024-11-10 05:20:04.024193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:10.971 [2024-11-10 05:20:04.024201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:10.971 [2024-11-10 05:20:04.024208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:10.971 [2024-11-10 05:20:04.024214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:10.971 [2024-11-10 05:20:04.024221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:10.971 [2024-11-10 05:20:04.024230] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:10.971 [2024-11-10 05:20:04.024243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:10.971 [2024-11-10 05:20:04.024253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:10.971 [2024-11-10 05:20:04.024261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:10.971 [2024-11-10 05:20:04.024268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:10.971 [2024-11-10 05:20:04.024276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:10.971 [2024-11-10 05:20:04.024283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:10.971 [2024-11-10 05:20:04.024293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:10.971 [2024-11-10 05:20:04.024300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:10.971 [2024-11-10 05:20:04.024307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:10.971 [2024-11-10 05:20:04.024314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:10.971 [2024-11-10 05:20:04.024321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:10.971 [2024-11-10 05:20:04.024329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:10.971 [2024-11-10 05:20:04.024336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:10.971 [2024-11-10 05:20:04.024342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:10.971 [2024-11-10 05:20:04.024349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:10.971 [2024-11-10 05:20:04.024356] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:10.971 [2024-11-10 05:20:04.024368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:10.971 [2024-11-10 05:20:04.024379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:10.971 [2024-11-10 05:20:04.024387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:10.971 [2024-11-10 05:20:04.024394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:10.971 [2024-11-10 05:20:04.024400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:10.971 [2024-11-10 05:20:04.024408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.971 [2024-11-10 05:20:04.024418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:10.971 [2024-11-10 05:20:04.024426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:18:10.971 [2024-11-10 05:20:04.024432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.971 [2024-11-10 05:20:04.045798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.971 [2024-11-10 05:20:04.045867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.971 [2024-11-10 05:20:04.045895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.320 ms 00:18:10.971 [2024-11-10 05:20:04.045909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.971 [2024-11-10 05:20:04.046063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.971 [2024-11-10 05:20:04.046078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:10.971 [2024-11-10 05:20:04.046090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.110 ms 00:18:10.971 [2024-11-10 05:20:04.046101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.971 [2024-11-10 05:20:04.058487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.971 [2024-11-10 05:20:04.058534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.971 [2024-11-10 05:20:04.058546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.304 ms 00:18:10.971 [2024-11-10 05:20:04.058553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.971 [2024-11-10 05:20:04.058587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.971 [2024-11-10 05:20:04.058596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.971 [2024-11-10 05:20:04.058604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:10.971 [2024-11-10 05:20:04.058612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.971 [2024-11-10 05:20:04.059217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.971 [2024-11-10 05:20:04.059261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.971 [2024-11-10 05:20:04.059272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:18:10.971 [2024-11-10 05:20:04.059282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.971 [2024-11-10 05:20:04.059427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.971 [2024-11-10 05:20:04.059438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.971 [2024-11-10 05:20:04.059448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:18:10.971 [2024-11-10 05:20:04.059457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.971 [2024-11-10 05:20:04.066232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.971 [2024-11-10 05:20:04.066275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.971 [2024-11-10 05:20:04.066293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.752 ms 00:18:10.971 [2024-11-10 05:20:04.066301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.971 [2024-11-10 05:20:04.069644] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:10.971 [2024-11-10 05:20:04.069699] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:10.971 [2024-11-10 05:20:04.069715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.971 [2024-11-10 05:20:04.069723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:10.971 [2024-11-10 05:20:04.069732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.320 ms 00:18:10.971 [2024-11-10 05:20:04.069739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.971 [2024-11-10 05:20:04.085747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.971 [2024-11-10 05:20:04.085827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:10.971 [2024-11-10 05:20:04.085843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.956 ms 00:18:10.971 [2024-11-10 05:20:04.085854] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.971 [2024-11-10 05:20:04.088716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.971 [2024-11-10 05:20:04.088903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:10.971 [2024-11-10 05:20:04.088923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.796 ms 00:18:10.971 [2024-11-10 05:20:04.088931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.971 [2024-11-10 05:20:04.091136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.971 [2024-11-10 05:20:04.091182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:10.971 [2024-11-10 05:20:04.091192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.166 ms 00:18:10.971 [2024-11-10 05:20:04.091199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.971 [2024-11-10 05:20:04.091545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.971 [2024-11-10 05:20:04.091563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:10.971 [2024-11-10 05:20:04.091574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:18:10.971 [2024-11-10 05:20:04.091581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.972 [2024-11-10 05:20:04.115770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.972 [2024-11-10 05:20:04.115836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:10.972 [2024-11-10 05:20:04.115862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.171 ms 00:18:10.972 [2024-11-10 05:20:04.115871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.972 [2024-11-10 05:20:04.124251] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:10.972 [2024-11-10 05:20:04.127346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.972 [2024-11-10 05:20:04.127387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:10.972 [2024-11-10 05:20:04.127411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.424 ms 00:18:10.972 [2024-11-10 05:20:04.127423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.972 [2024-11-10 05:20:04.127503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.972 [2024-11-10 05:20:04.127515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:10.972 [2024-11-10 05:20:04.127525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:10.972 [2024-11-10 05:20:04.127533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.972 [2024-11-10 05:20:04.127603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.972 [2024-11-10 05:20:04.127613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:10.972 [2024-11-10 05:20:04.127622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:10.972 [2024-11-10 05:20:04.127630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.972 [2024-11-10 05:20:04.127658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.972 [2024-11-10 05:20:04.127667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:18:10.972 [2024-11-10 05:20:04.127676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:10.972 [2024-11-10 05:20:04.127684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.972 [2024-11-10 05:20:04.127722] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:10.972 [2024-11-10 05:20:04.127736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.972 [2024-11-10 05:20:04.127747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:10.972 [2024-11-10 05:20:04.127756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:10.972 [2024-11-10 05:20:04.127764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.972 [2024-11-10 05:20:04.133138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.972 [2024-11-10 05:20:04.133192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:10.972 [2024-11-10 05:20:04.133209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.355 ms 00:18:10.972 [2024-11-10 05:20:04.133217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.972 [2024-11-10 05:20:04.133297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.972 [2024-11-10 05:20:04.133307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:10.972 [2024-11-10 05:20:04.133317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:10.972 [2024-11-10 05:20:04.133325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.972 [2024-11-10 05:20:04.134435] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 127.012 ms, result 0 00:18:11.915  [2024-11-10T05:20:06.537Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-10T05:20:07.481Z] Copying: 33/1024 [MB] (15 MBps) [2024-11-10T05:20:08.426Z] Copying: 54/1024 [MB] (21 MBps) [2024-11-10T05:20:09.368Z] Copying: 66/1024 [MB] (12 MBps) [2024-11-10T05:20:10.311Z] Copying: 82/1024 [MB] (16 MBps) [2024-11-10T05:20:11.254Z] Copying: 100/1024 [MB] (17 MBps) [2024-11-10T05:20:12.198Z] Copying: 111/1024 [MB] (10 MBps) [2024-11-10T05:20:13.584Z] Copying: 124452/1048576 [kB] (10096 kBps) [2024-11-10T05:20:14.155Z] Copying: 140/1024 [MB] (18 MBps) [2024-11-10T05:20:15.542Z] Copying: 169/1024 [MB] (29 MBps) [2024-11-10T05:20:16.486Z] Copying: 180/1024 [MB] (10 MBps) [2024-11-10T05:20:17.428Z] Copying: 191/1024 [MB] (10 MBps) [2024-11-10T05:20:18.372Z] Copying: 212/1024 [MB] (20 MBps) [2024-11-10T05:20:19.344Z] Copying: 224/1024 [MB] (12 MBps) [2024-11-10T05:20:20.285Z] Copying: 235/1024 [MB] (10 MBps) [2024-11-10T05:20:21.228Z] Copying: 245/1024 [MB] (10 MBps) [2024-11-10T05:20:22.172Z] Copying: 261920/1048576 [kB] (10112 kBps) [2024-11-10T05:20:23.559Z] Copying: 266/1024 [MB] (11 MBps) [2024-11-10T05:20:24.503Z] Copying: 277/1024 [MB] (10 MBps) [2024-11-10T05:20:25.444Z] Copying: 292/1024 [MB] (14 MBps) [2024-11-10T05:20:26.388Z] Copying: 302/1024 [MB] (10 MBps) [2024-11-10T05:20:27.331Z] Copying: 313/1024 [MB] (10 MBps) [2024-11-10T05:20:28.274Z] Copying: 350/1024 [MB] (37 MBps) [2024-11-10T05:20:29.219Z] Copying: 367/1024 [MB] (16 MBps) [2024-11-10T05:20:30.161Z] Copying: 390/1024 [MB] (23 MBps) [2024-11-10T05:20:31.549Z] Copying: 420/1024 [MB] (29 MBps) [2024-11-10T05:20:32.507Z] Copying: 
444/1024 [MB] (23 MBps) [2024-11-10T05:20:33.466Z] Copying: 456/1024 [MB] (12 MBps) [2024-11-10T05:20:34.411Z] Copying: 483/1024 [MB] (26 MBps) [2024-11-10T05:20:35.354Z] Copying: 499/1024 [MB] (15 MBps) [2024-11-10T05:20:36.300Z] Copying: 523/1024 [MB] (24 MBps) [2024-11-10T05:20:37.244Z] Copying: 541/1024 [MB] (17 MBps) [2024-11-10T05:20:38.187Z] Copying: 570/1024 [MB] (29 MBps) [2024-11-10T05:20:39.575Z] Copying: 603/1024 [MB] (32 MBps) [2024-11-10T05:20:40.147Z] Copying: 633/1024 [MB] (29 MBps) [2024-11-10T05:20:41.533Z] Copying: 662/1024 [MB] (28 MBps) [2024-11-10T05:20:42.875Z] Copying: 712/1024 [MB] (50 MBps) [2024-11-10T05:20:43.446Z] Copying: 733/1024 [MB] (20 MBps) [2024-11-10T05:20:44.388Z] Copying: 755/1024 [MB] (22 MBps) [2024-11-10T05:20:45.331Z] Copying: 772/1024 [MB] (16 MBps) [2024-11-10T05:20:46.273Z] Copying: 793/1024 [MB] (20 MBps) [2024-11-10T05:20:47.219Z] Copying: 810/1024 [MB] (16 MBps) [2024-11-10T05:20:48.165Z] Copying: 828/1024 [MB] (18 MBps) [2024-11-10T05:20:49.552Z] Copying: 848/1024 [MB] (20 MBps) [2024-11-10T05:20:50.496Z] Copying: 866/1024 [MB] (17 MBps) [2024-11-10T05:20:51.439Z] Copying: 888/1024 [MB] (22 MBps) [2024-11-10T05:20:52.383Z] Copying: 914/1024 [MB] (26 MBps) [2024-11-10T05:20:53.326Z] Copying: 931/1024 [MB] (17 MBps) [2024-11-10T05:20:54.269Z] Copying: 984/1024 [MB] (52 MBps) [2024-11-10T05:20:55.215Z] Copying: 1003/1024 [MB] (18 MBps) [2024-11-10T05:20:55.215Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-10 05:20:55.010298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.979 [2024-11-10 05:20:55.010337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:01.979 [2024-11-10 05:20:55.010348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:01.979 [2024-11-10 05:20:55.010355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.979 [2024-11-10 05:20:55.010371] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:01.979 [2024-11-10 05:20:55.010751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.979 [2024-11-10 05:20:55.010770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:01.979 [2024-11-10 05:20:55.010777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:19:01.979 [2024-11-10 05:20:55.010788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.979 [2024-11-10 05:20:55.012062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.979 [2024-11-10 05:20:55.012081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:01.979 [2024-11-10 05:20:55.012089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:19:01.979 [2024-11-10 05:20:55.012095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.979 [2024-11-10 05:20:55.023512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.979 [2024-11-10 05:20:55.023543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:01.979 [2024-11-10 05:20:55.023552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.405 ms 00:19:01.979 [2024-11-10 05:20:55.023558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.979 [2024-11-10 05:20:55.028338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.979 [2024-11-10 05:20:55.028359] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:01.979 [2024-11-10 05:20:55.028367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.759 ms 00:19:01.979 [2024-11-10 05:20:55.028374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.979 [2024-11-10 05:20:55.029224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.979 [2024-11-10 05:20:55.029328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:01.979 [2024-11-10 05:20:55.029339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:19:01.979 [2024-11-10 05:20:55.029345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.979 [2024-11-10 05:20:55.032562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.979 [2024-11-10 05:20:55.032662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:01.979 [2024-11-10 05:20:55.032674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.195 ms 00:19:01.979 [2024-11-10 05:20:55.032685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.979 [2024-11-10 05:20:55.032769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.979 [2024-11-10 05:20:55.032777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:01.979 [2024-11-10 05:20:55.032784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:01.979 [2024-11-10 05:20:55.032793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.979 [2024-11-10 05:20:55.034394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.979 [2024-11-10 05:20:55.034415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:01.979 [2024-11-10 05:20:55.034422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.590 ms 00:19:01.979 [2024-11-10 05:20:55.034427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.979 [2024-11-10 05:20:55.035830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.979 [2024-11-10 05:20:55.035926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:01.979 [2024-11-10 05:20:55.035936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.381 ms 00:19:01.979 [2024-11-10 05:20:55.035941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.979 [2024-11-10 05:20:55.036820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.979 [2024-11-10 05:20:55.036842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:01.979 [2024-11-10 05:20:55.036848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.858 ms 00:19:01.979 [2024-11-10 05:20:55.036853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.979 [2024-11-10 05:20:55.037669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.979 [2024-11-10 05:20:55.037695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:01.979 [2024-11-10 05:20:55.037701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:19:01.979 [2024-11-10 05:20:55.037706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.979 [2024-11-10 05:20:55.037727] ftl_debug.c: 165:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Bands validity: 00:19:01.979 [2024-11-10 05:20:55.037738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.037985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.038004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.038011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.038017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.038022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.038028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.038034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.038040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.038045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:01.979 [2024-11-10 05:20:55.038051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038125] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 
05:20:55.038268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:19:01.980 [2024-11-10 05:20:55.038409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:01.980 [2024-11-10 05:20:55.038421] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:01.980 [2024-11-10 05:20:55.038427] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 009e53e2-83fe-4f5e-b934-8f419491318d 00:19:01.980 [2024-11-10 05:20:55.038433] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:01.980 [2024-11-10 05:20:55.038439] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:01.980 [2024-11-10 05:20:55.038444] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:01.980 [2024-11-10 05:20:55.038450] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:01.980 [2024-11-10 05:20:55.038455] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:01.980 [2024-11-10 05:20:55.038460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:01.980 [2024-11-10 05:20:55.038466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:01.980 [2024-11-10 05:20:55.038470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:01.980 [2024-11-10 05:20:55.038476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:01.980 [2024-11-10 05:20:55.038481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.980 [2024-11-10 05:20:55.038486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:01.980 [2024-11-10 05:20:55.038492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:19:01.980 [2024-11-10 05:20:55.038504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.980 [2024-11-10 05:20:55.039880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.980 [2024-11-10 05:20:55.039899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:01.980 [2024-11-10 05:20:55.039907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.365 ms 00:19:01.980 [2024-11-10 05:20:55.039912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.980 [2024-11-10 05:20:55.040010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.980 [2024-11-10 05:20:55.040017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:01.980 [2024-11-10 05:20:55.040027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:01.980 [2024-11-10 05:20:55.040032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.980 [2024-11-10 05:20:55.043654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.980 [2024-11-10 05:20:55.043679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:01.980 [2024-11-10 05:20:55.043686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.980 [2024-11-10 05:20:55.043692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.980 [2024-11-10 05:20:55.043730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.980 [2024-11-10 05:20:55.043736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:01.981 [2024-11-10 05:20:55.043745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:19:01.981 [2024-11-10 05:20:55.043751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.981 [2024-11-10 05:20:55.043777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.981 [2024-11-10 05:20:55.043783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:01.981 [2024-11-10 05:20:55.043789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.981 [2024-11-10 05:20:55.043794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.981 [2024-11-10 05:20:55.043805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.981 [2024-11-10 05:20:55.043811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:01.981 [2024-11-10 05:20:55.043817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.981 [2024-11-10 05:20:55.043824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.981 [2024-11-10 05:20:55.051187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.981 [2024-11-10 05:20:55.051218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:01.981 [2024-11-10 05:20:55.051226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.981 [2024-11-10 05:20:55.051232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.981 [2024-11-10 05:20:55.057216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.981 [2024-11-10 05:20:55.057255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:01.981 [2024-11-10 05:20:55.057266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.981 [2024-11-10 05:20:55.057272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.981 [2024-11-10 05:20:55.057305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.981 [2024-11-10 05:20:55.057312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:01.981 [2024-11-10 05:20:55.057318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.981 [2024-11-10 05:20:55.057324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.981 [2024-11-10 05:20:55.057346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.981 [2024-11-10 05:20:55.057352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:01.981 [2024-11-10 05:20:55.057358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.981 [2024-11-10 05:20:55.057364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.981 [2024-11-10 05:20:55.057415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.981 [2024-11-10 05:20:55.057422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:01.981 [2024-11-10 05:20:55.057428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.981 [2024-11-10 05:20:55.057434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.981 [2024-11-10 05:20:55.057457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.981 [2024-11-10 05:20:55.057464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:01.981 [2024-11-10 
05:20:55.057470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.981 [2024-11-10 05:20:55.057475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.981 [2024-11-10 05:20:55.057506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.981 [2024-11-10 05:20:55.057513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:01.981 [2024-11-10 05:20:55.057519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.981 [2024-11-10 05:20:55.057524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.981 [2024-11-10 05:20:55.057555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.981 [2024-11-10 05:20:55.057562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:01.981 [2024-11-10 05:20:55.057568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.981 [2024-11-10 05:20:55.057574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.981 [2024-11-10 05:20:55.057663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.344 ms, result 0 00:19:02.242 00:19:02.242 00:19:02.242 05:20:55 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:02.503 [2024-11-10 05:20:55.497336] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:02.503 [2024-11-10 05:20:55.497623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86928 ] 00:19:02.503 [2024-11-10 05:20:55.647926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.503 [2024-11-10 05:20:55.698981] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.766 [2024-11-10 05:20:55.813892] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:02.766 [2024-11-10 05:20:55.813979] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:02.766 [2024-11-10 05:20:55.975753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.766 [2024-11-10 05:20:55.975812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:02.766 [2024-11-10 05:20:55.975836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:02.766 [2024-11-10 05:20:55.975844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.766 [2024-11-10 05:20:55.975900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.766 [2024-11-10 05:20:55.975915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:02.766 [2024-11-10 05:20:55.975925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:02.766 [2024-11-10 05:20:55.975933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.766 [2024-11-10 05:20:55.975956] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:02.766 [2024-11-10 05:20:55.976283] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:02.766 [2024-11-10 05:20:55.976302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.766 [2024-11-10 05:20:55.976310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:02.766 [2024-11-10 05:20:55.976324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:19:02.766 [2024-11-10 05:20:55.976338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.766 [2024-11-10 05:20:55.977949] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:02.766 [2024-11-10 05:20:55.981892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.766 [2024-11-10 05:20:55.981945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:02.766 [2024-11-10 05:20:55.981958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.945 ms 00:19:02.766 [2024-11-10 05:20:55.981966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.766 [2024-11-10 05:20:55.982069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.766 [2024-11-10 05:20:55.982086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:02.766 [2024-11-10 05:20:55.982099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:02.766 [2024-11-10 05:20:55.982110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.766 [2024-11-10 05:20:55.990209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.766 [2024-11-10 05:20:55.990252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:02.766 [2024-11-10 05:20:55.990263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.055 ms 00:19:02.766 [2024-11-10 05:20:55.990277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.766 [2024-11-10 05:20:55.990383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.766 [2024-11-10 05:20:55.990394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:02.766 [2024-11-10 05:20:55.990403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:02.766 [2024-11-10 05:20:55.990411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.766 [2024-11-10 05:20:55.990477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.766 [2024-11-10 05:20:55.990492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:02.766 [2024-11-10 05:20:55.990500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:02.766 [2024-11-10 05:20:55.990508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.766 [2024-11-10 05:20:55.990535] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:02.766 [2024-11-10 05:20:55.992560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.766 [2024-11-10 05:20:55.992736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:02.766 [2024-11-10 05:20:55.992754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.034 ms 00:19:02.766 [2024-11-10 05:20:55.992762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.766 [2024-11-10 05:20:55.992801] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.766 [2024-11-10 05:20:55.992810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:02.766 [2024-11-10 05:20:55.992818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:02.766 [2024-11-10 05:20:55.992826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.766 [2024-11-10 05:20:55.992849] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:02.767 [2024-11-10 05:20:55.992879] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:02.767 [2024-11-10 05:20:55.992916] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:02.767 [2024-11-10 05:20:55.992937] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:02.767 [2024-11-10 05:20:55.993063] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:02.767 [2024-11-10 05:20:55.993076] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:02.767 [2024-11-10 05:20:55.993087] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:02.767 [2024-11-10 05:20:55.993099] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:02.767 [2024-11-10 05:20:55.993112] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:02.767 [2024-11-10 05:20:55.993124] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:02.767 [2024-11-10 05:20:55.993132] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:02.767 [2024-11-10 05:20:55.993140] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:02.767 [2024-11-10 05:20:55.993148] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:02.767 [2024-11-10 05:20:55.993157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.767 [2024-11-10 05:20:55.993169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:02.767 [2024-11-10 05:20:55.993177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:19:02.767 [2024-11-10 05:20:55.993185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.767 [2024-11-10 05:20:55.993271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.767 [2024-11-10 05:20:55.993283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:02.767 [2024-11-10 05:20:55.993293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:02.767 [2024-11-10 05:20:55.993301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.767 [2024-11-10 05:20:55.993401] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:02.767 [2024-11-10 05:20:55.993413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:02.767 [2024-11-10 05:20:55.993422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:02.767 [2024-11-10 05:20:55.993439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:19:02.767 [2024-11-10 05:20:55.993448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:02.767 [2024-11-10 05:20:55.993455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:02.767 [2024-11-10 05:20:55.993464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:02.767 [2024-11-10 05:20:55.993472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:02.767 [2024-11-10 05:20:55.993481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:02.767 [2024-11-10 05:20:55.993489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:02.767 [2024-11-10 05:20:55.993497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:02.767 [2024-11-10 05:20:55.993505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:02.767 [2024-11-10 05:20:55.993515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:02.767 [2024-11-10 05:20:55.993523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:02.767 [2024-11-10 05:20:55.993533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:02.767 [2024-11-10 05:20:55.993542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.767 [2024-11-10 05:20:55.993550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:02.767 [2024-11-10 05:20:55.993558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:02.767 [2024-11-10 05:20:55.993566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.767 [2024-11-10 05:20:55.993574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:02.767 [2024-11-10 05:20:55.993582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:02.767 [2024-11-10 05:20:55.993590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.767 [2024-11-10 05:20:55.993598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:02.767 [2024-11-10 05:20:55.993606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:02.767 [2024-11-10 05:20:55.993613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.767 [2024-11-10 05:20:55.993622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:02.767 [2024-11-10 05:20:55.993630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:02.767 [2024-11-10 05:20:55.993638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.767 [2024-11-10 05:20:55.993651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:02.767 [2024-11-10 05:20:55.993659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:02.767 [2024-11-10 05:20:55.993668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.767 [2024-11-10 05:20:55.993677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:02.767 [2024-11-10 05:20:55.993685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:02.767 [2024-11-10 05:20:55.993693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:02.767 [2024-11-10 05:20:55.993700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:02.767 [2024-11-10 05:20:55.993708] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:02.767 [2024-11-10 05:20:55.993716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:02.767 [2024-11-10 05:20:55.993724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:02.767 [2024-11-10 05:20:55.993732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:02.767 [2024-11-10 05:20:55.993740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.767 [2024-11-10 05:20:55.993747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:02.767 [2024-11-10 05:20:55.993755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:02.767 [2024-11-10 05:20:55.993763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.767 [2024-11-10 05:20:55.993771] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:02.767 [2024-11-10 05:20:55.993785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:02.767 [2024-11-10 05:20:55.993792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:02.767 [2024-11-10 05:20:55.993802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.767 [2024-11-10 05:20:55.993816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:02.767 [2024-11-10 05:20:55.993823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:02.767 [2024-11-10 05:20:55.993831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:02.767 [2024-11-10 05:20:55.993838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:02.767 [2024-11-10 05:20:55.993846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:02.767 [2024-11-10 05:20:55.993852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:02.767 [2024-11-10 05:20:55.993861] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:02.767 [2024-11-10 05:20:55.993870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:02.767 [2024-11-10 05:20:55.993880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:02.767 [2024-11-10 05:20:55.993887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:02.767 [2024-11-10 05:20:55.993894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:02.767 [2024-11-10 05:20:55.993902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:02.767 [2024-11-10 05:20:55.993909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:02.767 [2024-11-10 05:20:55.993919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:02.767 [2024-11-10 05:20:55.993926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:02.767 [2024-11-10 05:20:55.993933] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:02.767 [2024-11-10 05:20:55.993939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:02.767 [2024-11-10 05:20:55.993947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:02.767 [2024-11-10 05:20:55.993954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:02.767 [2024-11-10 05:20:55.993961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:02.767 [2024-11-10 05:20:55.993968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:02.767 [2024-11-10 05:20:55.993975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:02.767 [2024-11-10 05:20:55.993982] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:02.767 [2024-11-10 05:20:55.994030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:02.767 [2024-11-10 05:20:55.994039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:02.767 [2024-11-10 05:20:55.994046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:02.767 [2024-11-10 05:20:55.994054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:02.767 [2024-11-10 05:20:55.994061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:02.767 [2024-11-10 05:20:55.994069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.767 [2024-11-10 05:20:55.994080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:02.768 [2024-11-10 05:20:55.994088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:19:02.768 [2024-11-10 05:20:55.994096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.016150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.016218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:03.030 [2024-11-10 05:20:56.016254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.005 ms 00:19:03.030 [2024-11-10 05:20:56.016268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.016403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.016417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:03.030 [2024-11-10 05:20:56.016430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:03.030 [2024-11-10 05:20:56.016441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.028585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.028631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:03.030 [2024-11-10 05:20:56.028643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.060 ms 00:19:03.030 [2024-11-10 05:20:56.028650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.028686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.028694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:03.030 [2024-11-10 05:20:56.028703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:03.030 [2024-11-10 05:20:56.028711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.029274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.029303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:03.030 [2024-11-10 05:20:56.029315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:19:03.030 [2024-11-10 05:20:56.029324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.029467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.029478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:03.030 [2024-11-10 05:20:56.029488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:19:03.030 [2024-11-10 05:20:56.029497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.036384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.036426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:03.030 [2024-11-10 05:20:56.036445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.863 ms 00:19:03.030 [2024-11-10 05:20:56.036452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.040269] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:03.030 [2024-11-10 05:20:56.040321] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:03.030 [2024-11-10 05:20:56.040339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.040348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:03.030 [2024-11-10 05:20:56.040357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.781 ms 00:19:03.030 [2024-11-10 05:20:56.040364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.056208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.056272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:03.030 [2024-11-10 05:20:56.056288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.792 ms 00:19:03.030 [2024-11-10 05:20:56.056296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.059232] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.059280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:03.030 [2024-11-10 05:20:56.059290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.885 ms 00:19:03.030 [2024-11-10 05:20:56.059299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.062250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.062440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:03.030 [2024-11-10 05:20:56.062460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.902 ms 00:19:03.030 [2024-11-10 05:20:56.062468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.062815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.062831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:03.030 [2024-11-10 05:20:56.062841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:19:03.030 [2024-11-10 05:20:56.062849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.086801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.086872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:03.030 [2024-11-10 05:20:56.086892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.934 ms 00:19:03.030 [2024-11-10 05:20:56.086901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.095121] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:03.030 [2024-11-10 05:20:56.098247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.098292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:03.030 [2024-11-10 05:20:56.098312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.294 ms 00:19:03.030 [2024-11-10 05:20:56.098320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.098399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.098411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:03.030 [2024-11-10 05:20:56.098420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:03.030 [2024-11-10 05:20:56.098429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.098498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.098509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:03.030 [2024-11-10 05:20:56.098517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:03.030 [2024-11-10 05:20:56.098528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.030 [2024-11-10 05:20:56.098548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.030 [2024-11-10 05:20:56.098557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:03.030 [2024-11-10 05:20:56.098566] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:03.030 [2024-11-10 05:20:56.098574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.031 [2024-11-10 05:20:56.098615] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:03.031 [2024-11-10 05:20:56.098629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.031 [2024-11-10 05:20:56.098637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:03.031 [2024-11-10 05:20:56.098646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:03.031 [2024-11-10 05:20:56.098654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.031 [2024-11-10 05:20:56.103916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.031 [2024-11-10 05:20:56.103977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:03.031 [2024-11-10 05:20:56.104011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.242 ms 00:19:03.031 [2024-11-10 05:20:56.104020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.031 [2024-11-10 05:20:56.104105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.031 [2024-11-10 05:20:56.104116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:03.031 [2024-11-10 05:20:56.104126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:03.031 [2024-11-10 05:20:56.104134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.031 [2024-11-10 05:20:56.105272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 129.038 ms, result 0 00:19:04.418
[2024-11-10T05:20:58.598Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-10T05:21:57.138Z] Copying: 1024/1024 [MB] (average 16 MBps)
[2024-11-10 05:21:56.892548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.902 [2024-11-10 05:21:56.892948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:03.903 [2024-11-10 05:21:56.892978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:03.903 [2024-11-10 05:21:56.893012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.903 [2024-11-10 05:21:56.893060] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:03.903 [2024-11-10 05:21:56.893876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.903 [2024-11-10 05:21:56.893920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:03.903 [2024-11-10 05:21:56.893937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:20:03.903 [2024-11-10 05:21:56.893948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.903 [2024-11-10 05:21:56.894217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.903 [2024-11-10 05:21:56.894231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:03.903 [2024-11-10 05:21:56.894240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:20:03.903 [2024-11-10 05:21:56.894249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.903 [2024-11-10 05:21:56.898403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:03.903 [2024-11-10 05:21:56.898454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:03.903 [2024-11-10 05:21:56.898464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.135 ms 00:20:03.903 [2024-11-10 05:21:56.898478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.903 [2024-11-10 05:21:56.905010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.903 [2024-11-10 05:21:56.905060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:03.903 [2024-11-10 05:21:56.905072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.504 ms 00:20:03.903 [2024-11-10 05:21:56.905080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.903 [2024-11-10 05:21:56.908036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.903 [2024-11-10 05:21:56.908231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:03.903 [2024-11-10 05:21:56.908251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.874 ms 00:20:03.903 [2024-11-10 05:21:56.908261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.903 [2024-11-10 05:21:56.913730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.903 [2024-11-10 05:21:56.913805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:03.903 [2024-11-10 05:21:56.913820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.345 ms 00:20:03.903 [2024-11-10 05:21:56.913829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.903 [2024-11-10 05:21:56.913942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.903 [2024-11-10 05:21:56.913953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:03.903 [2024-11-10 05:21:56.913963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:03.903 [2024-11-10 05:21:56.913971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.903 [2024-11-10 05:21:56.917186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.903 [2024-11-10 05:21:56.917241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:03.903 [2024-11-10 05:21:56.917252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.196 ms 00:20:03.903 [2024-11-10 05:21:56.917259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.903 [2024-11-10 05:21:56.920316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.903 [2024-11-10 05:21:56.920367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:03.903 [2024-11-10 05:21:56.920378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.991 ms 00:20:03.903 [2024-11-10 05:21:56.920386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.903 [2024-11-10 05:21:56.922611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.903 [2024-11-10 05:21:56.922662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:03.903 [2024-11-10 05:21:56.922672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.179 ms 00:20:03.903 [2024-11-10 05:21:56.922679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.903 [2024-11-10 05:21:56.925701] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.903 [2024-11-10 05:21:56.925773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:03.903 [2024-11-10 05:21:56.925783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.945 ms 00:20:03.903 [2024-11-10 05:21:56.925790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.903 [2024-11-10 05:21:56.925835] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:03.903 [2024-11-10 05:21:56.925862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.925879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.925888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.925898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.925907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.925915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.925924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.925933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.925941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.925949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.925957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.925965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.925973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.925981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 
05:21:56.926068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:03.903 [2024-11-10 05:21:56.926227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 
00:20:03.904 [2024-11-10 05:21:56.926274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 
wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:03.904 [2024-11-10 05:21:56.926719] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:03.904 [2024-11-10 05:21:56.926728] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 009e53e2-83fe-4f5e-b934-8f419491318d 00:20:03.904 [2024-11-10 05:21:56.926737] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:03.904 [2024-11-10 05:21:56.926744] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:03.904 [2024-11-10 05:21:56.926752] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:03.904 [2024-11-10 05:21:56.926760] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:03.904 [2024-11-10 05:21:56.926768] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:03.904 [2024-11-10 05:21:56.926784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:03.904 [2024-11-10 05:21:56.926792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:03.904 [2024-11-10 05:21:56.926799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:03.904 [2024-11-10 05:21:56.926806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:03.904 [2024-11-10 05:21:56.926813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.904 [2024-11-10 05:21:56.926822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:03.904 [2024-11-10 05:21:56.926838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:20:03.904 [2024-11-10 05:21:56.926846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.904 [2024-11-10 05:21:56.929364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.904 [2024-11-10 05:21:56.929398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:03.904 [2024-11-10 05:21:56.929410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.477 ms 00:20:03.904 [2024-11-10 05:21:56.929419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.904 [2024-11-10 05:21:56.929557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.904 [2024-11-10 05:21:56.929570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:03.904 [2024-11-10 05:21:56.929580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:20:03.904 [2024-11-10 05:21:56.929587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.904 [2024-11-10 05:21:56.937203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.905 [2024-11-10 05:21:56.937251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:03.905 [2024-11-10 05:21:56.937262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:20:03.905 [2024-11-10 05:21:56.937270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.905 [2024-11-10 05:21:56.937332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.905 [2024-11-10 05:21:56.937347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:03.905 [2024-11-10 05:21:56.937356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.905 [2024-11-10 05:21:56.937363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.905 [2024-11-10 05:21:56.937425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.905 [2024-11-10 05:21:56.937435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:03.905 [2024-11-10 05:21:56.937444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.905 [2024-11-10 05:21:56.937452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.905 [2024-11-10 05:21:56.937467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.905 [2024-11-10 05:21:56.937482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:03.905 [2024-11-10 05:21:56.937493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.905 [2024-11-10 05:21:56.937502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.905 [2024-11-10 05:21:56.950812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.905 [2024-11-10 05:21:56.950868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:03.905 [2024-11-10 05:21:56.950880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.905 [2024-11-10 05:21:56.950888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.905 [2024-11-10 05:21:56.961108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.905 [2024-11-10 05:21:56.961165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:03.905 [2024-11-10 05:21:56.961179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.905 [2024-11-10 05:21:56.961188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.905 [2024-11-10 05:21:56.961235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.905 [2024-11-10 05:21:56.961245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:03.905 [2024-11-10 05:21:56.961260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.905 [2024-11-10 05:21:56.961269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.905 [2024-11-10 05:21:56.961304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.905 [2024-11-10 05:21:56.961313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:03.905 [2024-11-10 05:21:56.961321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.905 [2024-11-10 05:21:56.961331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.905 [2024-11-10 05:21:56.961399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.905 [2024-11-10 05:21:56.961409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:03.905 
[2024-11-10 05:21:56.961418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.905 [2024-11-10 05:21:56.961426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.905 [2024-11-10 05:21:56.961453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.905 [2024-11-10 05:21:56.961463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:03.905 [2024-11-10 05:21:56.961471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.905 [2024-11-10 05:21:56.961479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.905 [2024-11-10 05:21:56.961521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.905 [2024-11-10 05:21:56.961530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:03.905 [2024-11-10 05:21:56.961538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.905 [2024-11-10 05:21:56.961546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.905 [2024-11-10 05:21:56.961596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.905 [2024-11-10 05:21:56.961608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:03.905 [2024-11-10 05:21:56.961617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.905 [2024-11-10 05:21:56.961627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.905 [2024-11-10 05:21:56.961753] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.177 ms, result 0 00:20:04.167 00:20:04.167 00:20:04.167 05:21:57 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:06.759 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:06.759 05:21:59 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:06.759 [2024-11-10 05:21:59.442189] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:20:06.759 [2024-11-10 05:21:59.442294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87584 ] 00:20:06.759 [2024-11-10 05:21:59.584833] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.759 [2024-11-10 05:21:59.623722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.759 [2024-11-10 05:21:59.721396] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.759 [2024-11-10 05:21:59.721466] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.760 [2024-11-10 05:21:59.879739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.760 [2024-11-10 05:21:59.879807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:06.760 [2024-11-10 05:21:59.879827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:06.760 [2024-11-10 05:21:59.879836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.760 [2024-11-10 05:21:59.879897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.760 [2024-11-10 05:21:59.879908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:06.760 [2024-11-10 05:21:59.879917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:06.760 [2024-11-10 05:21:59.879926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.760 [2024-11-10 05:21:59.879950] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:06.760 [2024-11-10 05:21:59.880281] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:06.760 [2024-11-10 05:21:59.880303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.760 [2024-11-10 05:21:59.880312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.760 [2024-11-10 05:21:59.880325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:20:06.760 [2024-11-10 05:21:59.880336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.760 [2024-11-10 05:21:59.882089] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:06.760 [2024-11-10 05:21:59.886308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.760 [2024-11-10 05:21:59.886364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:06.760 [2024-11-10 05:21:59.886378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.221 ms 00:20:06.760 [2024-11-10 05:21:59.886386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.760 [2024-11-10 05:21:59.886476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.760 [2024-11-10 05:21:59.886485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:06.760 [2024-11-10 05:21:59.886497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:06.760 [2024-11-10 05:21:59.886510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.760 [2024-11-10 05:21:59.894935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:06.760 [2024-11-10 05:21:59.895027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:06.760 [2024-11-10 05:21:59.895042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.381 ms 00:20:06.760 [2024-11-10 05:21:59.895054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.760 [2024-11-10 05:21:59.895161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.760 [2024-11-10 05:21:59.895171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:06.760 [2024-11-10 05:21:59.895180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:06.760 [2024-11-10 05:21:59.895189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.760 [2024-11-10 05:21:59.895253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.760 [2024-11-10 05:21:59.895267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:06.760 [2024-11-10 05:21:59.895276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:06.760 [2024-11-10 05:21:59.895283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.760 [2024-11-10 05:21:59.895310] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:06.760 [2024-11-10 05:21:59.897305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.760 [2024-11-10 05:21:59.897485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:06.760 [2024-11-10 05:21:59.897504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.004 ms 00:20:06.760 [2024-11-10 05:21:59.897512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.760 [2024-11-10 05:21:59.897550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.760 [2024-11-10 05:21:59.897558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:06.760 [2024-11-10 05:21:59.897567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:06.760 [2024-11-10 05:21:59.897574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.760 [2024-11-10 05:21:59.897596] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:06.760 [2024-11-10 05:21:59.897627] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:06.760 [2024-11-10 05:21:59.897664] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:06.760 [2024-11-10 05:21:59.897681] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:06.760 [2024-11-10 05:21:59.897786] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:06.760 [2024-11-10 05:21:59.897797] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:06.760 [2024-11-10 05:21:59.897813] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:06.760 [2024-11-10 05:21:59.897824] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:06.760 [2024-11-10 05:21:59.897836] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:06.760 [2024-11-10 05:21:59.897845] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:06.760 [2024-11-10 05:21:59.897854] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:06.760 [2024-11-10 05:21:59.897862] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:06.760 [2024-11-10 05:21:59.897874] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:06.760 [2024-11-10 05:21:59.897882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.760 [2024-11-10 05:21:59.897890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:06.760 [2024-11-10 05:21:59.897899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:20:06.760 [2024-11-10 05:21:59.897907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.760 [2024-11-10 05:21:59.898014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.760 [2024-11-10 05:21:59.898031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:06.760 [2024-11-10 05:21:59.898043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:06.760 [2024-11-10 05:21:59.898052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.760 [2024-11-10 05:21:59.898156] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:06.760 [2024-11-10 05:21:59.898168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:06.760 [2024-11-10 05:21:59.898177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.760 [2024-11-10 05:21:59.898194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.760 [2024-11-10 05:21:59.898203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:06.760 [2024-11-10 05:21:59.898212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:06.760 [2024-11-10 05:21:59.898220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:06.760 [2024-11-10 05:21:59.898228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:06.760 [2024-11-10 05:21:59.898237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:06.760 [2024-11-10 05:21:59.898245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.760 [2024-11-10 05:21:59.898253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:06.760 [2024-11-10 05:21:59.898262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:06.760 [2024-11-10 05:21:59.898272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.760 [2024-11-10 05:21:59.898280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:06.760 [2024-11-10 05:21:59.898288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:06.760 [2024-11-10 05:21:59.898296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.760 [2024-11-10 05:21:59.898304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:06.760 [2024-11-10 05:21:59.898313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:06.760 [2024-11-10 05:21:59.898322] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.760 [2024-11-10 05:21:59.898330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:06.760 [2024-11-10 05:21:59.898338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:06.760 [2024-11-10 05:21:59.898346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.760 [2024-11-10 05:21:59.898354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:06.760 [2024-11-10 05:21:59.898363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:06.760 [2024-11-10 05:21:59.898371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.760 [2024-11-10 05:21:59.898379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:06.760 [2024-11-10 05:21:59.898387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:06.760 [2024-11-10 05:21:59.898395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.760 [2024-11-10 05:21:59.898410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:06.760 [2024-11-10 05:21:59.898419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:06.760 [2024-11-10 05:21:59.898427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.760 [2024-11-10 05:21:59.898433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:06.760 [2024-11-10 05:21:59.898440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:06.760 [2024-11-10 05:21:59.898448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.760 [2024-11-10 05:21:59.898455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:06.760 [2024-11-10 05:21:59.898462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:06.760 [2024-11-10 05:21:59.898468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.760 [2024-11-10 05:21:59.898475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:06.760 [2024-11-10 05:21:59.898482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:06.760 [2024-11-10 05:21:59.898488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.760 [2024-11-10 05:21:59.898495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:06.760 [2024-11-10 05:21:59.898502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:06.760 [2024-11-10 05:21:59.898509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.760 [2024-11-10 05:21:59.898516] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:06.760 [2024-11-10 05:21:59.898529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:06.760 [2024-11-10 05:21:59.898537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.760 [2024-11-10 05:21:59.898547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.760 [2024-11-10 05:21:59.898555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:06.760 [2024-11-10 05:21:59.898562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:06.760 [2024-11-10 05:21:59.898572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:06.760 
[2024-11-10 05:21:59.898579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:06.760 [2024-11-10 05:21:59.898586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:06.760 [2024-11-10 05:21:59.898593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:06.760 [2024-11-10 05:21:59.898601] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:06.760 [2024-11-10 05:21:59.898610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.760 [2024-11-10 05:21:59.898622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:06.760 [2024-11-10 05:21:59.898630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:06.760 [2024-11-10 05:21:59.898638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:06.760 [2024-11-10 05:21:59.898645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:06.760 [2024-11-10 05:21:59.898652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:06.760 [2024-11-10 05:21:59.898661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:06.760 [2024-11-10 05:21:59.898669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:06.760 [2024-11-10 05:21:59.898677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:06.761 [2024-11-10 05:21:59.898684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:06.761 [2024-11-10 05:21:59.898691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:06.761 [2024-11-10 05:21:59.898698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:06.761 [2024-11-10 05:21:59.898706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:06.761 [2024-11-10 05:21:59.898713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:06.761 [2024-11-10 05:21:59.898721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:06.761 [2024-11-10 05:21:59.898729] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:06.761 [2024-11-10 05:21:59.898737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.761 [2024-11-10 05:21:59.898746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:06.761 [2024-11-10 05:21:59.898754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:06.761 [2024-11-10 05:21:59.898762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:06.761 [2024-11-10 05:21:59.898770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:06.761 [2024-11-10 05:21:59.898777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.761 [2024-11-10 05:21:59.898787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:06.761 [2024-11-10 05:21:59.898796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:20:06.761 [2024-11-10 05:21:59.898803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.761 [2024-11-10 05:21:59.921919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.761 [2024-11-10 05:21:59.922025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.761 [2024-11-10 05:21:59.922049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.071 ms 00:20:06.761 [2024-11-10 05:21:59.922063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.761 [2024-11-10 05:21:59.922192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.761 [2024-11-10 05:21:59.922207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:06.761 [2024-11-10 05:21:59.922221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:06.761 [2024-11-10 05:21:59.922233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.761 [2024-11-10 05:21:59.933629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.761 [2024-11-10 05:21:59.933678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.761 [2024-11-10 05:21:59.933689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.312 ms 00:20:06.761 [2024-11-10 05:21:59.933697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.761 [2024-11-10 05:21:59.933732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.761 [2024-11-10 05:21:59.933741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.761 [2024-11-10 05:21:59.933750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:06.761 [2024-11-10 05:21:59.933757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.761 [2024-11-10 05:21:59.934263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.761 [2024-11-10 05:21:59.934302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.761 [2024-11-10 05:21:59.934313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:20:06.761 [2024-11-10 05:21:59.934322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.761 [2024-11-10 05:21:59.934460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.761 [2024-11-10 05:21:59.934475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.761 [2024-11-10 05:21:59.934485] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:20:06.761 [2024-11-10 05:21:59.934493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.761 [2024-11-10 05:21:59.940825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.761 [2024-11-10 05:21:59.941026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.761 [2024-11-10 05:21:59.941051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.308 ms 00:20:06.761 [2024-11-10 05:21:59.941060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.761 [2024-11-10 05:21:59.944667] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:06.761 [2024-11-10 05:21:59.944824] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:06.761 [2024-11-10 05:21:59.944847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.761 [2024-11-10 05:21:59.944856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:06.761 [2024-11-10 05:21:59.944864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.682 ms 00:20:06.761 [2024-11-10 05:21:59.944872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.761 [2024-11-10 05:21:59.960451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.761 [2024-11-10 05:21:59.960503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:06.761 [2024-11-10 05:21:59.960524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.539 ms 00:20:06.761 [2024-11-10 05:21:59.960532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.761 [2024-11-10 05:21:59.963342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.761 [2024-11-10 05:21:59.963500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:06.761 [2024-11-10 05:21:59.963517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.760 ms 00:20:06.761 [2024-11-10 05:21:59.963524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.761 [2024-11-10 05:21:59.966232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.761 [2024-11-10 05:21:59.966275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:06.761 [2024-11-10 05:21:59.966286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.673 ms 00:20:06.761 [2024-11-10 05:21:59.966293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.761 [2024-11-10 05:21:59.966646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.761 [2024-11-10 05:21:59.966662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:06.761 [2024-11-10 05:21:59.966671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:20:06.761 [2024-11-10 05:21:59.966679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.761 [2024-11-10 05:21:59.989608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.761 [2024-11-10 05:21:59.989828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:06.761 [2024-11-10 05:21:59.989849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.910 ms 00:20:06.761 [2024-11-10 05:21:59.989865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.023 [2024-11-10 05:21:59.997916] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:07.023 [2024-11-10 05:22:00.000905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.023 [2024-11-10 05:22:00.001085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:07.023 [2024-11-10 05:22:00.001112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.999 ms 00:20:07.023 [2024-11-10 05:22:00.001121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.023 [2024-11-10 05:22:00.001203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.023 [2024-11-10 05:22:00.001214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:07.023 [2024-11-10 05:22:00.001224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:07.023 [2024-11-10 05:22:00.001231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.023 [2024-11-10 05:22:00.001297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.023 [2024-11-10 05:22:00.001308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:07.023 [2024-11-10 05:22:00.001317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:07.023 [2024-11-10 05:22:00.001329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.023 [2024-11-10 05:22:00.001349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.023 [2024-11-10 05:22:00.001358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:07.023 [2024-11-10 05:22:00.001366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:07.023 [2024-11-10 05:22:00.001374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.023 [2024-11-10 05:22:00.001407] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:07.023 [2024-11-10 05:22:00.001424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.023 [2024-11-10 05:22:00.001432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:07.023 [2024-11-10 05:22:00.001440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:07.023 [2024-11-10 05:22:00.001448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.023 [2024-11-10 05:22:00.006961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.023 [2024-11-10 05:22:00.007042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:07.023 [2024-11-10 05:22:00.007054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.492 ms 00:20:07.023 [2024-11-10 05:22:00.007062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.023 [2024-11-10 05:22:00.007149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.024 [2024-11-10 05:22:00.007159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:07.024 [2024-11-10 05:22:00.007168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:07.024 [2024-11-10 05:22:00.007175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.024 
[2024-11-10 05:22:00.008249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.040 ms, result 0 00:20:07.967  [2024-11-10T05:22:02.147Z] Copying: 10164/1048576 [kB] (10164 kBps) [2024-11-10T05:22:03.093Z] Copying: 25/1024 [MB] (15 MBps) [2024-11-10T05:22:04.039Z] Copying: 42/1024 [MB] (17 MBps) [2024-11-10T05:22:05.428Z] Copying: 65/1024 [MB] (23 MBps) [2024-11-10T05:22:06.372Z] Copying: 88/1024 [MB] (22 MBps) [2024-11-10T05:22:07.317Z] Copying: 110/1024 [MB] (21 MBps) [2024-11-10T05:22:08.261Z] Copying: 132/1024 [MB] (22 MBps) [2024-11-10T05:22:09.206Z] Copying: 158/1024 [MB] (26 MBps) [2024-11-10T05:22:10.151Z] Copying: 175/1024 [MB] (16 MBps) [2024-11-10T05:22:11.096Z] Copying: 192/1024 [MB] (17 MBps) [2024-11-10T05:22:12.043Z] Copying: 204/1024 [MB] (11 MBps) [2024-11-10T05:22:13.449Z] Copying: 221/1024 [MB] (16 MBps) [2024-11-10T05:22:14.050Z] Copying: 241/1024 [MB] (19 MBps) [2024-11-10T05:22:15.439Z] Copying: 251/1024 [MB] (10 MBps) [2024-11-10T05:22:16.384Z] Copying: 262/1024 [MB] (10 MBps) [2024-11-10T05:22:17.329Z] Copying: 272/1024 [MB] (10 MBps) [2024-11-10T05:22:18.274Z] Copying: 282/1024 [MB] (10 MBps) [2024-11-10T05:22:19.219Z] Copying: 293/1024 [MB] (10 MBps) [2024-11-10T05:22:20.161Z] Copying: 304/1024 [MB] (10 MBps) [2024-11-10T05:22:21.102Z] Copying: 314/1024 [MB] (10 MBps) [2024-11-10T05:22:22.043Z] Copying: 324/1024 [MB] (10 MBps) [2024-11-10T05:22:23.430Z] Copying: 345/1024 [MB] (20 MBps) [2024-11-10T05:22:24.375Z] Copying: 360/1024 [MB] (15 MBps) [2024-11-10T05:22:25.320Z] Copying: 376/1024 [MB] (15 MBps) [2024-11-10T05:22:26.265Z] Copying: 393/1024 [MB] (16 MBps) [2024-11-10T05:22:27.208Z] Copying: 408/1024 [MB] (15 MBps) [2024-11-10T05:22:28.163Z] Copying: 418/1024 [MB] (10 MBps) [2024-11-10T05:22:29.106Z] Copying: 448/1024 [MB] (30 MBps) [2024-11-10T05:22:30.049Z] Copying: 468/1024 [MB] (19 MBps) [2024-11-10T05:22:31.434Z] Copying: 485/1024 [MB] (16 MBps) [2024-11-10T05:22:32.377Z] Copying: 504/1024 [MB] (18 MBps) [2024-11-10T05:22:33.318Z] Copying: 518/1024 [MB] (13 MBps) [2024-11-10T05:22:34.261Z] Copying: 535/1024 [MB] (17 MBps) [2024-11-10T05:22:35.204Z] Copying: 551/1024 [MB] (16 MBps) [2024-11-10T05:22:36.146Z] Copying: 577/1024 [MB] (26 MBps) [2024-11-10T05:22:37.090Z] Copying: 605/1024 [MB] (27 MBps) [2024-11-10T05:22:38.035Z] Copying: 624/1024 [MB] (19 MBps) [2024-11-10T05:22:39.423Z] Copying: 643/1024 [MB] (19 MBps) [2024-11-10T05:22:40.366Z] Copying: 665/1024 [MB] (21 MBps) [2024-11-10T05:22:41.312Z] Copying: 687/1024 [MB] (22 MBps) [2024-11-10T05:22:42.293Z] Copying: 699/1024 [MB] (11 MBps) [2024-11-10T05:22:43.236Z] Copying: 714/1024 [MB] (15 MBps) [2024-11-10T05:22:44.181Z] Copying: 730/1024 [MB] (15 MBps) [2024-11-10T05:22:45.124Z] Copying: 747/1024 [MB] (16 MBps) [2024-11-10T05:22:46.068Z] Copying: 770/1024 [MB] (23 MBps) [2024-11-10T05:22:47.451Z] Copying: 781/1024 [MB] (10 MBps) [2024-11-10T05:22:48.024Z] Copying: 792/1024 [MB] (10 MBps) [2024-11-10T05:22:49.412Z] Copying: 802/1024 [MB] (10 MBps) [2024-11-10T05:22:50.353Z] Copying: 814/1024 [MB] (12 MBps) [2024-11-10T05:22:51.296Z] Copying: 849/1024 [MB] (34 MBps) [2024-11-10T05:22:52.242Z] Copying: 861/1024 [MB] (11 MBps) [2024-11-10T05:22:53.186Z] Copying: 874/1024 [MB] (13 MBps) [2024-11-10T05:22:54.130Z] Copying: 885/1024 [MB] (10 MBps) [2024-11-10T05:22:55.073Z] Copying: 895/1024 [MB] (10 MBps) [2024-11-10T05:22:56.457Z] Copying: 906/1024 [MB] (10 MBps) [2024-11-10T05:22:57.033Z] Copying: 918/1024 [MB] (11 MBps) 
[2024-11-10T05:22:58.419Z] Copying: 928/1024 [MB] (10 MBps) [2024-11-10T05:22:59.361Z] Copying: 939/1024 [MB] (10 MBps) [2024-11-10T05:23:00.305Z] Copying: 952/1024 [MB] (13 MBps) [2024-11-10T05:23:01.250Z] Copying: 970/1024 [MB] (17 MBps) [2024-11-10T05:23:02.194Z] Copying: 994/1024 [MB] (24 MBps) [2024-11-10T05:23:03.137Z] Copying: 1008/1024 [MB] (13 MBps) [2024-11-10T05:23:04.080Z] Copying: 1023/1024 [MB] (14 MBps) [2024-11-10T05:23:04.080Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-10 05:23:03.765694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.844 [2024-11-10 05:23:03.766214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:10.844 [2024-11-10 05:23:03.766803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:10.844 [2024-11-10 05:23:03.766871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.844 [2024-11-10 05:23:03.770111] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:10.844 [2024-11-10 05:23:03.771792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.844 [2024-11-10 05:23:03.771851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:10.844 [2024-11-10 05:23:03.771865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.619 ms 00:21:10.844 [2024-11-10 05:23:03.771875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.844 [2024-11-10 05:23:03.786206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.844 [2024-11-10 05:23:03.786270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:10.844 [2024-11-10 05:23:03.786287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.625 ms 00:21:10.844 [2024-11-10 05:23:03.786296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.844 [2024-11-10 05:23:03.812634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.844 [2024-11-10 05:23:03.812720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:10.844 [2024-11-10 05:23:03.812734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.317 ms 00:21:10.844 [2024-11-10 05:23:03.812742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.844 [2024-11-10 05:23:03.818962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.844 [2024-11-10 05:23:03.819032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:10.844 [2024-11-10 05:23:03.819045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.177 ms 00:21:10.844 [2024-11-10 05:23:03.819054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.844 [2024-11-10 05:23:03.821663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.844 [2024-11-10 05:23:03.821719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:10.844 [2024-11-10 05:23:03.821731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.543 ms 00:21:10.844 [2024-11-10 05:23:03.821739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.844 [2024-11-10 05:23:03.826741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.844 [2024-11-10 05:23:03.826798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map 
metadata 00:21:10.844 [2024-11-10 05:23:03.826809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.955 ms 00:21:10.844 [2024-11-10 05:23:03.826818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.844 [2024-11-10 05:23:04.066471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.844 [2024-11-10 05:23:04.066531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:10.844 [2024-11-10 05:23:04.066546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 239.592 ms 00:21:10.844 [2024-11-10 05:23:04.066556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.844 [2024-11-10 05:23:04.069369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.844 [2024-11-10 05:23:04.069423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:10.844 [2024-11-10 05:23:04.069435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.794 ms 00:21:10.844 [2024-11-10 05:23:04.069443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.844 [2024-11-10 05:23:04.071618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.844 [2024-11-10 05:23:04.071671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:10.844 [2024-11-10 05:23:04.071682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.116 ms 00:21:10.844 [2024-11-10 05:23:04.071692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.844 [2024-11-10 05:23:04.073385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.844 [2024-11-10 05:23:04.073436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:10.844 [2024-11-10 05:23:04.073446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.649 ms 00:21:10.844 [2024-11-10 05:23:04.073454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.844 [2024-11-10 05:23:04.075211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.844 [2024-11-10 05:23:04.075261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:10.844 [2024-11-10 05:23:04.075272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:21:10.844 [2024-11-10 05:23:04.075280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.844 [2024-11-10 05:23:04.075322] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:10.844 [2024-11-10 05:23:04.075339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 99072 / 261120 wr_cnt: 1 state: open 00:21:10.844 [2024-11-10 05:23:04.075351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:10.844 [2024-11-10 05:23:04.075595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075610] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 
05:23:04.075805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.075983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:21:10.845 [2024-11-10 05:23:04.076014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:10.845 [2024-11-10 05:23:04.076206] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:10.845 [2024-11-10 05:23:04.076215] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 009e53e2-83fe-4f5e-b934-8f419491318d 00:21:10.845 [2024-11-10 05:23:04.076224] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 99072 00:21:10.845 [2024-11-10 05:23:04.076238] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 100032 00:21:10.845 [2024-11-10 05:23:04.076251] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 99072 00:21:10.845 [2024-11-10 05:23:04.076268] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0097 00:21:10.845 [2024-11-10 05:23:04.076280] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
limits: 00:21:10.845 [2024-11-10 05:23:04.076292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:10.845 [2024-11-10 05:23:04.076304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:10.845 [2024-11-10 05:23:04.076310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:10.845 [2024-11-10 05:23:04.076317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:10.845 [2024-11-10 05:23:04.076325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.845 [2024-11-10 05:23:04.076333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:10.845 [2024-11-10 05:23:04.076342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:21:10.846 [2024-11-10 05:23:04.076349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.107 [2024-11-10 05:23:04.078681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.108 [2024-11-10 05:23:04.078722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:11.108 [2024-11-10 05:23:04.078734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.304 ms 00:21:11.108 [2024-11-10 05:23:04.078743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.108 [2024-11-10 05:23:04.078858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.108 [2024-11-10 05:23:04.078869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:11.108 [2024-11-10 05:23:04.078879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:21:11.108 [2024-11-10 05:23:04.078886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.108 [2024-11-10 05:23:04.085562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.108 [2024-11-10 05:23:04.085612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.108 [2024-11-10 05:23:04.085623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.108 [2024-11-10 05:23:04.085631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.108 [2024-11-10 05:23:04.085691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.108 [2024-11-10 05:23:04.085701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.108 [2024-11-10 05:23:04.085718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.108 [2024-11-10 05:23:04.085726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.108 [2024-11-10 05:23:04.085799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.108 [2024-11-10 05:23:04.085814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.108 [2024-11-10 05:23:04.085822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.108 [2024-11-10 05:23:04.085830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.108 [2024-11-10 05:23:04.085847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.108 [2024-11-10 05:23:04.085855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:11.108 [2024-11-10 05:23:04.085864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.108 [2024-11-10 05:23:04.085876] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.108 [2024-11-10 05:23:04.098783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.108 [2024-11-10 05:23:04.098839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:11.108 [2024-11-10 05:23:04.098851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.108 [2024-11-10 05:23:04.098859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.108 [2024-11-10 05:23:04.108669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.108 [2024-11-10 05:23:04.108881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:11.108 [2024-11-10 05:23:04.108901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.108 [2024-11-10 05:23:04.108910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.108 [2024-11-10 05:23:04.108983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.108 [2024-11-10 05:23:04.109017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:11.108 [2024-11-10 05:23:04.109040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.108 [2024-11-10 05:23:04.109048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.108 [2024-11-10 05:23:04.109085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.108 [2024-11-10 05:23:04.109094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:11.108 [2024-11-10 05:23:04.109102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.108 [2024-11-10 05:23:04.109111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.108 [2024-11-10 05:23:04.109187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.108 [2024-11-10 05:23:04.109197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:11.108 [2024-11-10 05:23:04.109208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.108 [2024-11-10 05:23:04.109216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.108 [2024-11-10 05:23:04.109246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.108 [2024-11-10 05:23:04.109256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:11.108 [2024-11-10 05:23:04.109269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.108 [2024-11-10 05:23:04.109276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.108 [2024-11-10 05:23:04.109316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.108 [2024-11-10 05:23:04.109325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:11.108 [2024-11-10 05:23:04.109334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.108 [2024-11-10 05:23:04.109345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.108 [2024-11-10 05:23:04.109389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.108 [2024-11-10 05:23:04.109400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:11.108 [2024-11-10 05:23:04.109410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:21:11.108 [2024-11-10 05:23:04.109418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.108 [2024-11-10 05:23:04.109557] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.747 ms, result 0 00:21:11.682 00:21:11.682 00:21:11.682 05:23:04 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:11.682 [2024-11-10 05:23:04.800455] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:11.682 [2024-11-10 05:23:04.800858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88263 ] 00:21:11.944 [2024-11-10 05:23:04.952573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.944 [2024-11-10 05:23:05.005903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.944 [2024-11-10 05:23:05.121590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:11.944 [2024-11-10 05:23:05.121673] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:12.208 [2024-11-10 05:23:05.283551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.208 [2024-11-10 05:23:05.283789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:12.208 [2024-11-10 05:23:05.283827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:12.208 [2024-11-10 05:23:05.283840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.208 [2024-11-10 05:23:05.283930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.208 [2024-11-10 05:23:05.283947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:12.208 [2024-11-10 05:23:05.283961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:12.208 [2024-11-10 05:23:05.283973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.208 [2024-11-10 05:23:05.284039] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:12.208 [2024-11-10 05:23:05.284335] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:12.208 [2024-11-10 05:23:05.284353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.208 [2024-11-10 05:23:05.284363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:12.208 [2024-11-10 05:23:05.284376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:21:12.208 [2024-11-10 05:23:05.284388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.208 [2024-11-10 05:23:05.286152] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:12.208 [2024-11-10 05:23:05.290127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.208 [2024-11-10 05:23:05.290336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:12.208 [2024-11-10 05:23:05.290357] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.977 ms 00:21:12.208 [2024-11-10 05:23:05.290366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.208 [2024-11-10 05:23:05.290482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.208 [2024-11-10 05:23:05.290497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:12.208 [2024-11-10 05:23:05.290510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:12.208 [2024-11-10 05:23:05.290518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.208 [2024-11-10 05:23:05.299071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.208 [2024-11-10 05:23:05.299118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:12.208 [2024-11-10 05:23:05.299130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.495 ms 00:21:12.208 [2024-11-10 05:23:05.299153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.208 [2024-11-10 05:23:05.299253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.208 [2024-11-10 05:23:05.299263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:12.208 [2024-11-10 05:23:05.299276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:21:12.208 [2024-11-10 05:23:05.299284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.208 [2024-11-10 05:23:05.299350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.208 [2024-11-10 05:23:05.299361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:12.208 [2024-11-10 05:23:05.299370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:12.208 [2024-11-10 05:23:05.299378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.208 [2024-11-10 05:23:05.299405] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:12.208 [2024-11-10 05:23:05.301526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.208 [2024-11-10 05:23:05.301722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:12.208 [2024-11-10 05:23:05.301740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.130 ms 00:21:12.208 [2024-11-10 05:23:05.301749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.208 [2024-11-10 05:23:05.301794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.208 [2024-11-10 05:23:05.301803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:12.208 [2024-11-10 05:23:05.301816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:12.208 [2024-11-10 05:23:05.301824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.208 [2024-11-10 05:23:05.301848] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:12.208 [2024-11-10 05:23:05.301879] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:12.208 [2024-11-10 05:23:05.301917] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:12.208 [2024-11-10 05:23:05.301934] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:12.208 [2024-11-10 05:23:05.302070] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:12.208 [2024-11-10 05:23:05.302088] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:12.208 [2024-11-10 05:23:05.302100] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:12.208 [2024-11-10 05:23:05.302111] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:12.208 [2024-11-10 05:23:05.302123] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:12.208 [2024-11-10 05:23:05.302136] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:12.208 [2024-11-10 05:23:05.302145] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:12.208 [2024-11-10 05:23:05.302161] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:12.208 [2024-11-10 05:23:05.302170] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:12.208 [2024-11-10 05:23:05.302179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.208 [2024-11-10 05:23:05.302187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:12.208 [2024-11-10 05:23:05.302195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:21:12.208 [2024-11-10 05:23:05.302203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.208 [2024-11-10 05:23:05.302289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.208 [2024-11-10 05:23:05.302301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:12.208 [2024-11-10 05:23:05.302309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:12.208 [2024-11-10 05:23:05.302318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.208 [2024-11-10 05:23:05.302419] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:12.208 [2024-11-10 05:23:05.302431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:12.208 [2024-11-10 05:23:05.302440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:12.208 [2024-11-10 05:23:05.302458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.208 [2024-11-10 05:23:05.302467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:12.208 [2024-11-10 05:23:05.302475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:12.208 [2024-11-10 05:23:05.302482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:12.208 [2024-11-10 05:23:05.302490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:12.208 [2024-11-10 05:23:05.302500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:12.208 [2024-11-10 05:23:05.302509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:12.208 [2024-11-10 05:23:05.302516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:12.208 [2024-11-10 05:23:05.302524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:21:12.208 [2024-11-10 05:23:05.302536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:12.208 [2024-11-10 05:23:05.302545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:12.209 [2024-11-10 05:23:05.302552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:12.209 [2024-11-10 05:23:05.302560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.209 [2024-11-10 05:23:05.302568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:12.209 [2024-11-10 05:23:05.302580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:12.209 [2024-11-10 05:23:05.302588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.209 [2024-11-10 05:23:05.302597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:12.209 [2024-11-10 05:23:05.302605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:12.209 [2024-11-10 05:23:05.302613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.209 [2024-11-10 05:23:05.302620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:12.209 [2024-11-10 05:23:05.302629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:12.209 [2024-11-10 05:23:05.302636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.209 [2024-11-10 05:23:05.302644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:12.209 [2024-11-10 05:23:05.302652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:12.209 [2024-11-10 05:23:05.302660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.209 [2024-11-10 05:23:05.302671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:12.209 [2024-11-10 05:23:05.302678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:12.209 [2024-11-10 05:23:05.302684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:12.209 [2024-11-10 05:23:05.302691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:12.209 [2024-11-10 05:23:05.302697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:12.209 [2024-11-10 05:23:05.302704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:12.209 [2024-11-10 05:23:05.302711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:12.209 [2024-11-10 05:23:05.302717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:12.209 [2024-11-10 05:23:05.302723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:12.209 [2024-11-10 05:23:05.302730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:12.209 [2024-11-10 05:23:05.302736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:12.209 [2024-11-10 05:23:05.302743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.209 [2024-11-10 05:23:05.302750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:12.209 [2024-11-10 05:23:05.302756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:12.209 [2024-11-10 05:23:05.302763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.209 [2024-11-10 05:23:05.302912] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:12.209 [2024-11-10 05:23:05.302922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:12.209 [2024-11-10 05:23:05.302929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:12.209 [2024-11-10 05:23:05.302940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:12.209 [2024-11-10 05:23:05.302948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:12.209 [2024-11-10 05:23:05.302955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:12.209 [2024-11-10 05:23:05.302962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:12.209 [2024-11-10 05:23:05.302969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:12.209 [2024-11-10 05:23:05.302976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:12.209 [2024-11-10 05:23:05.302983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:12.209 [2024-11-10 05:23:05.303006] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:12.209 [2024-11-10 05:23:05.303017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:12.209 [2024-11-10 05:23:05.303027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:12.209 [2024-11-10 05:23:05.303035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:12.209 [2024-11-10 05:23:05.303043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:12.209 [2024-11-10 05:23:05.303051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:12.209 [2024-11-10 05:23:05.303058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:12.209 [2024-11-10 05:23:05.303069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:12.209 [2024-11-10 05:23:05.303077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:12.209 [2024-11-10 05:23:05.303084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:12.209 [2024-11-10 05:23:05.303092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:12.209 [2024-11-10 05:23:05.303100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:12.209 [2024-11-10 05:23:05.303108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:12.209 [2024-11-10 05:23:05.303115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:12.209 [2024-11-10 05:23:05.303123] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:12.209 [2024-11-10 05:23:05.303131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:12.209 [2024-11-10 05:23:05.303139] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:12.209 [2024-11-10 05:23:05.303147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:12.209 [2024-11-10 05:23:05.303155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:12.209 [2024-11-10 05:23:05.303163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:12.209 [2024-11-10 05:23:05.303170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:12.209 [2024-11-10 05:23:05.303178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:12.209 [2024-11-10 05:23:05.303187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.209 [2024-11-10 05:23:05.303198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:12.209 [2024-11-10 05:23:05.303206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:21:12.209 [2024-11-10 05:23:05.303213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.209 [2024-11-10 05:23:05.326083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.209 [2024-11-10 05:23:05.326145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:12.209 [2024-11-10 05:23:05.326169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.804 ms 00:21:12.209 [2024-11-10 05:23:05.326179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.209 [2024-11-10 05:23:05.326284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.209 [2024-11-10 05:23:05.326294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:12.209 [2024-11-10 05:23:05.326302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:12.209 [2024-11-10 05:23:05.326311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.209 [2024-11-10 05:23:05.338691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.338744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:12.210 [2024-11-10 05:23:05.338757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.312 ms 00:21:12.210 [2024-11-10 05:23:05.338765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.338802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.338812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:12.210 [2024-11-10 05:23:05.338821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:12.210 [2024-11-10 05:23:05.338830] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.339421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.339470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:12.210 [2024-11-10 05:23:05.339482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:21:12.210 [2024-11-10 05:23:05.339492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.339655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.339667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:12.210 [2024-11-10 05:23:05.339677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:21:12.210 [2024-11-10 05:23:05.339687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.346493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.346546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:12.210 [2024-11-10 05:23:05.346560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.780 ms 00:21:12.210 [2024-11-10 05:23:05.346569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.350333] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:12.210 [2024-11-10 05:23:05.350389] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:12.210 [2024-11-10 05:23:05.350402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.350410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:12.210 [2024-11-10 05:23:05.350419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.736 ms 00:21:12.210 [2024-11-10 05:23:05.350427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.365984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.366064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:12.210 [2024-11-10 05:23:05.366080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.505 ms 00:21:12.210 [2024-11-10 05:23:05.366088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.368863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.368915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:12.210 [2024-11-10 05:23:05.368926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.724 ms 00:21:12.210 [2024-11-10 05:23:05.368934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.371619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.371805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:12.210 [2024-11-10 05:23:05.371823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.637 ms 00:21:12.210 [2024-11-10 05:23:05.371831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 
05:23:05.372226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.372245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:12.210 [2024-11-10 05:23:05.372260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:21:12.210 [2024-11-10 05:23:05.372268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.397135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.397209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:12.210 [2024-11-10 05:23:05.397224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.848 ms 00:21:12.210 [2024-11-10 05:23:05.397234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.405406] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:12.210 [2024-11-10 05:23:05.408487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.408532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:12.210 [2024-11-10 05:23:05.408555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.197 ms 00:21:12.210 [2024-11-10 05:23:05.408563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.408641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.408658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:12.210 [2024-11-10 05:23:05.408672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:12.210 [2024-11-10 05:23:05.408680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.410381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.410546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:12.210 [2024-11-10 05:23:05.410565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.664 ms 00:21:12.210 [2024-11-10 05:23:05.410585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.410615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.410630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:12.210 [2024-11-10 05:23:05.410639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:12.210 [2024-11-10 05:23:05.410651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.410691] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:12.210 [2024-11-10 05:23:05.410701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.410710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:12.210 [2024-11-10 05:23:05.410718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:12.210 [2024-11-10 05:23:05.410726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.415809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.415860] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:12.210 [2024-11-10 05:23:05.415885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.062 ms 00:21:12.210 [2024-11-10 05:23:05.415893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.415975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.210 [2024-11-10 05:23:05.415985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:12.210 [2024-11-10 05:23:05.416018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:12.210 [2024-11-10 05:23:05.416027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.210 [2024-11-10 05:23:05.417255] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.229 ms, result 0 00:21:13.601  [2024-11-10T05:23:07.781Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-10T05:23:08.725Z] Copying: 27/1024 [MB] (16 MBps) [2024-11-10T05:23:09.669Z] Copying: 43/1024 [MB] (16 MBps) [2024-11-10T05:23:10.626Z] Copying: 62/1024 [MB] (18 MBps) [2024-11-10T05:23:12.032Z] Copying: 75/1024 [MB] (13 MBps) [2024-11-10T05:23:12.605Z] Copying: 90/1024 [MB] (14 MBps) [2024-11-10T05:23:13.994Z] Copying: 105/1024 [MB] (14 MBps) [2024-11-10T05:23:14.937Z] Copying: 123/1024 [MB] (18 MBps) [2024-11-10T05:23:15.880Z] Copying: 138/1024 [MB] (14 MBps) [2024-11-10T05:23:16.825Z] Copying: 156/1024 [MB] (18 MBps) [2024-11-10T05:23:17.768Z] Copying: 171/1024 [MB] (15 MBps) [2024-11-10T05:23:18.712Z] Copying: 182/1024 [MB] (10 MBps) [2024-11-10T05:23:19.655Z] Copying: 213/1024 [MB] (31 MBps) [2024-11-10T05:23:21.038Z] Copying: 228/1024 [MB] (14 MBps) [2024-11-10T05:23:21.610Z] Copying: 243/1024 [MB] (15 MBps) [2024-11-10T05:23:22.995Z] Copying: 255/1024 [MB] (11 MBps) [2024-11-10T05:23:23.939Z] Copying: 273/1024 [MB] (18 MBps) [2024-11-10T05:23:24.882Z] Copying: 291/1024 [MB] (17 MBps) [2024-11-10T05:23:25.867Z] Copying: 307/1024 [MB] (16 MBps) [2024-11-10T05:23:26.810Z] Copying: 323/1024 [MB] (16 MBps) [2024-11-10T05:23:27.754Z] Copying: 340/1024 [MB] (16 MBps) [2024-11-10T05:23:28.697Z] Copying: 360/1024 [MB] (20 MBps) [2024-11-10T05:23:29.640Z] Copying: 376/1024 [MB] (16 MBps) [2024-11-10T05:23:31.027Z] Copying: 395/1024 [MB] (18 MBps) [2024-11-10T05:23:31.970Z] Copying: 409/1024 [MB] (14 MBps) [2024-11-10T05:23:32.913Z] Copying: 420/1024 [MB] (10 MBps) [2024-11-10T05:23:33.856Z] Copying: 433/1024 [MB] (13 MBps) [2024-11-10T05:23:34.800Z] Copying: 449/1024 [MB] (16 MBps) [2024-11-10T05:23:35.744Z] Copying: 467/1024 [MB] (17 MBps) [2024-11-10T05:23:36.687Z] Copying: 481/1024 [MB] (14 MBps) [2024-11-10T05:23:37.629Z] Copying: 492/1024 [MB] (10 MBps) [2024-11-10T05:23:39.014Z] Copying: 512/1024 [MB] (19 MBps) [2024-11-10T05:23:39.966Z] Copying: 524/1024 [MB] (12 MBps) [2024-11-10T05:23:40.908Z] Copying: 536/1024 [MB] (11 MBps) [2024-11-10T05:23:41.852Z] Copying: 550/1024 [MB] (13 MBps) [2024-11-10T05:23:42.797Z] Copying: 561/1024 [MB] (11 MBps) [2024-11-10T05:23:43.738Z] Copying: 573/1024 [MB] (11 MBps) [2024-11-10T05:23:44.678Z] Copying: 585/1024 [MB] (12 MBps) [2024-11-10T05:23:45.621Z] Copying: 596/1024 [MB] (11 MBps) [2024-11-10T05:23:47.008Z] Copying: 611/1024 [MB] (14 MBps) [2024-11-10T05:23:47.950Z] Copying: 624/1024 [MB] (12 MBps) [2024-11-10T05:23:48.893Z] Copying: 643/1024 [MB] (19 MBps) [2024-11-10T05:23:49.842Z] Copying: 654/1024 [MB] (11 MBps) [2024-11-10T05:23:50.789Z] 
Copying: 671/1024 [MB] (16 MBps) [2024-11-10T05:23:51.730Z] Copying: 682/1024 [MB] (11 MBps) [2024-11-10T05:23:52.671Z] Copying: 693/1024 [MB] (10 MBps) [2024-11-10T05:23:53.611Z] Copying: 706/1024 [MB] (12 MBps) [2024-11-10T05:23:55.003Z] Copying: 727/1024 [MB] (21 MBps) [2024-11-10T05:23:55.940Z] Copying: 740/1024 [MB] (13 MBps) [2024-11-10T05:23:56.879Z] Copying: 770/1024 [MB] (29 MBps) [2024-11-10T05:23:57.820Z] Copying: 785/1024 [MB] (15 MBps) [2024-11-10T05:23:58.760Z] Copying: 805/1024 [MB] (19 MBps) [2024-11-10T05:23:59.701Z] Copying: 816/1024 [MB] (11 MBps) [2024-11-10T05:24:00.641Z] Copying: 826/1024 [MB] (10 MBps) [2024-11-10T05:24:02.024Z] Copying: 844/1024 [MB] (17 MBps) [2024-11-10T05:24:02.965Z] Copying: 859/1024 [MB] (15 MBps) [2024-11-10T05:24:03.906Z] Copying: 870/1024 [MB] (10 MBps) [2024-11-10T05:24:04.850Z] Copying: 880/1024 [MB] (10 MBps) [2024-11-10T05:24:05.794Z] Copying: 891/1024 [MB] (10 MBps) [2024-11-10T05:24:06.733Z] Copying: 901/1024 [MB] (10 MBps) [2024-11-10T05:24:07.673Z] Copying: 912/1024 [MB] (10 MBps) [2024-11-10T05:24:08.621Z] Copying: 930/1024 [MB] (18 MBps) [2024-11-10T05:24:10.003Z] Copying: 942/1024 [MB] (11 MBps) [2024-11-10T05:24:10.945Z] Copying: 959/1024 [MB] (16 MBps) [2024-11-10T05:24:11.888Z] Copying: 975/1024 [MB] (16 MBps) [2024-11-10T05:24:12.828Z] Copying: 988/1024 [MB] (12 MBps) [2024-11-10T05:24:13.773Z] Copying: 1004/1024 [MB] (16 MBps) [2024-11-10T05:24:13.773Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-10 05:24:13.573165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.537 [2024-11-10 05:24:13.573246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:20.537 [2024-11-10 05:24:13.573261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:20.537 [2024-11-10 05:24:13.573271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.537 [2024-11-10 05:24:13.573293] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:20.537 [2024-11-10 05:24:13.574107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.537 [2024-11-10 05:24:13.574135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:20.537 [2024-11-10 05:24:13.574149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:22:20.537 [2024-11-10 05:24:13.574158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.537 [2024-11-10 05:24:13.574387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.537 [2024-11-10 05:24:13.574398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:20.537 [2024-11-10 05:24:13.574408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:22:20.537 [2024-11-10 05:24:13.574416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.537 [2024-11-10 05:24:13.580725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.537 [2024-11-10 05:24:13.580948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:20.537 [2024-11-10 05:24:13.580971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.293 ms 00:22:20.537 [2024-11-10 05:24:13.580979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.537 [2024-11-10 05:24:13.587727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.537 [2024-11-10 
05:24:13.587902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:20.537 [2024-11-10 05:24:13.587970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.691 ms 00:22:20.537 [2024-11-10 05:24:13.588008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.537 [2024-11-10 05:24:13.591169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.537 [2024-11-10 05:24:13.591338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:20.537 [2024-11-10 05:24:13.591408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.053 ms 00:22:20.537 [2024-11-10 05:24:13.591431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.537 [2024-11-10 05:24:13.595913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.537 [2024-11-10 05:24:13.596120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:20.537 [2024-11-10 05:24:13.596220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.435 ms 00:22:20.537 [2024-11-10 05:24:13.596247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.799 [2024-11-10 05:24:13.850635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.799 [2024-11-10 05:24:13.850817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:20.799 [2024-11-10 05:24:13.850886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 254.333 ms 00:22:20.799 [2024-11-10 05:24:13.850910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.799 [2024-11-10 05:24:13.853588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.799 [2024-11-10 05:24:13.853754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:20.799 [2024-11-10 05:24:13.853818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.641 ms 00:22:20.799 [2024-11-10 05:24:13.853840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.799 [2024-11-10 05:24:13.855727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.799 [2024-11-10 05:24:13.855903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:20.799 [2024-11-10 05:24:13.855965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.840 ms 00:22:20.799 [2024-11-10 05:24:13.856003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.799 [2024-11-10 05:24:13.857767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.800 [2024-11-10 05:24:13.857930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:20.800 [2024-11-10 05:24:13.858010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.716 ms 00:22:20.800 [2024-11-10 05:24:13.858034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.800 [2024-11-10 05:24:13.859745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.800 [2024-11-10 05:24:13.859909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:20.800 [2024-11-10 05:24:13.859974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.636 ms 00:22:20.800 [2024-11-10 05:24:13.860024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.800 [2024-11-10 05:24:13.860082] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:20.800 [2024-11-10 05:24:13.860114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:20.800 [2024-11-10 05:24:13.860236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.860270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.860300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.860369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.860401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.860431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.860460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.860520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.860551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.860579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.860643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.860673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.860703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.860755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.860923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861483] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 
05:24:13.861772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:22:20.800 [2024-11-10 05:24:13.861967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.861982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:20.800 [2024-11-10 05:24:13.862005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:20.801 [2024-11-10 05:24:13.862197] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:20.801 [2024-11-10 05:24:13.862206] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 009e53e2-83fe-4f5e-b934-8f419491318d 00:22:20.801 [2024-11-10 05:24:13.862214] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:20.801 [2024-11-10 05:24:13.862223] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32960 00:22:20.801 [2024-11-10 05:24:13.862231] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 32000 00:22:20.801 [2024-11-10 05:24:13.862254] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0300 00:22:20.801 [2024-11-10 05:24:13.862262] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:20.801 [2024-11-10 05:24:13.862270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:20.801 [2024-11-10 05:24:13.862278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:20.801 [2024-11-10 05:24:13.862285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:20.801 [2024-11-10 05:24:13.862291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:20.801 [2024-11-10 05:24:13.862300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.801 [2024-11-10 05:24:13.862308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:20.801 [2024-11-10 05:24:13.862317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.219 ms 00:22:20.801 [2024-11-10 05:24:13.862324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.801 [2024-11-10 05:24:13.864833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.801 [2024-11-10 05:24:13.864884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:20.801 [2024-11-10 05:24:13.864895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.486 ms 00:22:20.801 [2024-11-10 05:24:13.864904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.801 [2024-11-10 05:24:13.865056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.801 [2024-11-10 05:24:13.865066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:20.801 [2024-11-10 05:24:13.865076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:22:20.801 [2024-11-10 05:24:13.865084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.801 [2024-11-10 05:24:13.872138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.801 [2024-11-10 05:24:13.872186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:20.801 [2024-11-10 05:24:13.872197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.801 [2024-11-10 05:24:13.872205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.801 [2024-11-10 05:24:13.872267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.801 [2024-11-10 05:24:13.872276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:20.801 [2024-11-10 05:24:13.872284] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.801 [2024-11-10 05:24:13.872292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.801 [2024-11-10 05:24:13.872357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.801 [2024-11-10 05:24:13.872378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:20.801 [2024-11-10 05:24:13.872387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.801 [2024-11-10 05:24:13.872395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.801 [2024-11-10 05:24:13.872410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.801 [2024-11-10 05:24:13.872418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:20.801 [2024-11-10 05:24:13.872425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.801 [2024-11-10 05:24:13.872436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.801 [2024-11-10 05:24:13.885959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.801 [2024-11-10 05:24:13.886037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:20.801 [2024-11-10 05:24:13.886049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.801 [2024-11-10 05:24:13.886063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.801 [2024-11-10 05:24:13.896202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.801 [2024-11-10 05:24:13.896254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:20.801 [2024-11-10 05:24:13.896267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.801 [2024-11-10 05:24:13.896276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.801 [2024-11-10 05:24:13.896325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.801 [2024-11-10 05:24:13.896335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:20.801 [2024-11-10 05:24:13.896351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.801 [2024-11-10 05:24:13.896360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.801 [2024-11-10 05:24:13.896396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.801 [2024-11-10 05:24:13.896406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:20.801 [2024-11-10 05:24:13.896414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.801 [2024-11-10 05:24:13.896423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.801 [2024-11-10 05:24:13.896488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.801 [2024-11-10 05:24:13.896498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:20.801 [2024-11-10 05:24:13.896506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.801 [2024-11-10 05:24:13.896518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.801 [2024-11-10 05:24:13.896546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.801 [2024-11-10 05:24:13.896555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:22:20.801 [2024-11-10 05:24:13.896563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.801 [2024-11-10 05:24:13.896572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.801 [2024-11-10 05:24:13.896610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.801 [2024-11-10 05:24:13.896625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:20.801 [2024-11-10 05:24:13.896641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.801 [2024-11-10 05:24:13.896651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.801 [2024-11-10 05:24:13.896699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.801 [2024-11-10 05:24:13.896709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:20.801 [2024-11-10 05:24:13.896718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.801 [2024-11-10 05:24:13.896726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.801 [2024-11-10 05:24:13.896853] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 323.654 ms, result 0 00:22:21.062 00:22:21.062 00:22:21.062 05:24:14 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:23.609 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:23.609 05:24:16 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:23.609 05:24:16 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:23.609 05:24:16 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:23.609 05:24:16 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:23.609 05:24:16 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:23.609 Process with pid 86173 is not found 00:22:23.609 Remove shared memory files 00:22:23.609 05:24:16 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 86173 00:22:23.609 05:24:16 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86173 ']' 00:22:23.609 05:24:16 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86173 00:22:23.609 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86173) - No such process 00:22:23.609 05:24:16 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 86173 is not found' 00:22:23.609 05:24:16 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:23.609 05:24:16 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:23.609 05:24:16 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:23.609 05:24:16 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:23.609 05:24:16 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:23.609 05:24:16 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:23.609 05:24:16 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:23.609 ************************************ 00:22:23.609 END TEST ftl_restore 00:22:23.609 ************************************ 00:22:23.609 00:22:23.609 real 4m32.578s 00:22:23.609 user 4m20.527s 00:22:23.609 sys 0m11.967s 00:22:23.609 05:24:16 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:23.609 
05:24:16 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:23.609 05:24:16 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:23.609 05:24:16 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:23.609 05:24:16 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:23.609 05:24:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:23.609 ************************************ 00:22:23.609 START TEST ftl_dirty_shutdown 00:22:23.609 ************************************ 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:23.609 * Looking for test storage... 00:22:23.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:23.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.609 --rc genhtml_branch_coverage=1 00:22:23.609 --rc genhtml_function_coverage=1 00:22:23.609 --rc genhtml_legend=1 00:22:23.609 --rc geninfo_all_blocks=1 00:22:23.609 --rc geninfo_unexecuted_blocks=1 00:22:23.609 00:22:23.609 ' 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:23.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.609 --rc genhtml_branch_coverage=1 00:22:23.609 --rc genhtml_function_coverage=1 00:22:23.609 --rc genhtml_legend=1 00:22:23.609 --rc geninfo_all_blocks=1 00:22:23.609 --rc geninfo_unexecuted_blocks=1 00:22:23.609 00:22:23.609 ' 00:22:23.609 05:24:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:23.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.609 --rc genhtml_branch_coverage=1 00:22:23.609 --rc genhtml_function_coverage=1 00:22:23.609 --rc genhtml_legend=1 00:22:23.609 --rc geninfo_all_blocks=1 00:22:23.609 --rc geninfo_unexecuted_blocks=1 00:22:23.609 00:22:23.609 ' 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:23.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.610 --rc genhtml_branch_coverage=1 00:22:23.610 --rc genhtml_function_coverage=1 00:22:23.610 --rc genhtml_legend=1 00:22:23.610 --rc geninfo_all_blocks=1 00:22:23.610 --rc geninfo_unexecuted_blocks=1 00:22:23.610 00:22:23.610 ' 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
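The trace just above is scripts/common.sh deciding whether the installed lcov predates version 2 (`lt 1.15 2`): both version strings are split on dots and dashes into arrays and compared field by field as decimals, with the first differing field deciding. A condensed sketch of that comparison, assuming purely numeric fields:

```bash
# Condensed sketch of the cmp_versions walk traced above (scripts/common.sh).
lt() {                                   # lt A B: succeed when version A < version B
    local -a ver1 ver2
    IFS='.-' read -ra ver1 <<< "$1"
    IFS='.-' read -ra ver2 <<< "$2"
    local v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                             # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov < 2: use the legacy branch/function coverage options"
```

Here 1.15 splits into (1 15) and 2 into (2); the first fields, 1 against 2, settle it, which is why the trace ends in `return 0` and the legacy `--rc lcov_*_coverage=1` options get exported.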
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:23.610 05:24:16 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89066 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89066 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 89066 ']' 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.610 05:24:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:23.610 [2024-11-10 05:24:16.796911] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
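By this point dirty_shutdown.sh has parsed its arguments (`-c 0000:00:10.0` picks the NV-cache controller; the remaining positional `0000:00:11.0` is the base device), fixed the test parameters, armed the restore_kill trap, and launched spdk_tgt pinned to core 0 (pid 89066 here), and it now waits for the RPC socket. The shape of that startup, sketched with the values from the trace:

```bash
# Sketch of the option handling and target launch traced above (dirty_shutdown.sh).
while getopts ':u:c:' opt; do
    case $opt in
        c) nv_cache=$OPTARG ;;              # -c 0000:00:10.0 -> NV cache PCIe address
    esac
done
shift 2
device=$1                                   # 0000:00:11.0 -> base PCIe address
timeout=240 block_size=4096 chunk_size=262144 data_size=262144

trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # single reactor on core 0
svcpid=$!
waitforlisten "$svcpid"                     # blocks until /var/tmp/spdk.sock accepts RPCs
```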
00:22:23.610 [2024-11-10 05:24:16.797309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89066 ] 00:22:23.871 [2024-11-10 05:24:16.950434] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.871 [2024-11-10 05:24:16.999067] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.443 05:24:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:24.443 05:24:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:24.443 05:24:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:24.443 05:24:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:24.443 05:24:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:24.443 05:24:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:24.443 05:24:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:24.443 05:24:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:25.014 05:24:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:25.014 05:24:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:25.014 05:24:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:25.014 05:24:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:25.014 05:24:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:25.014 05:24:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:25.014 05:24:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:25.014 05:24:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:25.014 05:24:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:25.014 { 00:22:25.014 "name": "nvme0n1", 00:22:25.014 "aliases": [ 00:22:25.014 "a07a4897-c06d-4ab9-b422-0a4ef580a79c" 00:22:25.014 ], 00:22:25.014 "product_name": "NVMe disk", 00:22:25.014 "block_size": 4096, 00:22:25.014 "num_blocks": 1310720, 00:22:25.014 "uuid": "a07a4897-c06d-4ab9-b422-0a4ef580a79c", 00:22:25.014 "numa_id": -1, 00:22:25.014 "assigned_rate_limits": { 00:22:25.014 "rw_ios_per_sec": 0, 00:22:25.014 "rw_mbytes_per_sec": 0, 00:22:25.014 "r_mbytes_per_sec": 0, 00:22:25.014 "w_mbytes_per_sec": 0 00:22:25.014 }, 00:22:25.014 "claimed": true, 00:22:25.014 "claim_type": "read_many_write_one", 00:22:25.014 "zoned": false, 00:22:25.014 "supported_io_types": { 00:22:25.014 "read": true, 00:22:25.014 "write": true, 00:22:25.014 "unmap": true, 00:22:25.014 "flush": true, 00:22:25.014 "reset": true, 00:22:25.014 "nvme_admin": true, 00:22:25.014 "nvme_io": true, 00:22:25.014 "nvme_io_md": false, 00:22:25.014 "write_zeroes": true, 00:22:25.014 "zcopy": false, 00:22:25.014 "get_zone_info": false, 00:22:25.014 "zone_management": false, 00:22:25.014 "zone_append": false, 00:22:25.014 "compare": true, 00:22:25.014 "compare_and_write": false, 00:22:25.014 "abort": true, 00:22:25.014 "seek_hole": false, 00:22:25.014 "seek_data": false, 00:22:25.014 
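The base bdev comes from attaching the QEMU NVMe controller at 0000:00:11.0 over the RPC socket; the JSON being dumped here is the resulting nvme0n1 namespace as reported back. The two RPCs, exactly as driven above:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # yields bdev "nvme0n1"
$rpc bdev_get_bdevs -b nvme0n1                                      # dump its properties as JSON
```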
"copy": true, 00:22:25.014 "nvme_iov_md": false 00:22:25.014 }, 00:22:25.014 "driver_specific": { 00:22:25.014 "nvme": [ 00:22:25.014 { 00:22:25.014 "pci_address": "0000:00:11.0", 00:22:25.014 "trid": { 00:22:25.014 "trtype": "PCIe", 00:22:25.014 "traddr": "0000:00:11.0" 00:22:25.014 }, 00:22:25.014 "ctrlr_data": { 00:22:25.014 "cntlid": 0, 00:22:25.014 "vendor_id": "0x1b36", 00:22:25.014 "model_number": "QEMU NVMe Ctrl", 00:22:25.014 "serial_number": "12341", 00:22:25.014 "firmware_revision": "8.0.0", 00:22:25.014 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:25.014 "oacs": { 00:22:25.014 "security": 0, 00:22:25.014 "format": 1, 00:22:25.014 "firmware": 0, 00:22:25.014 "ns_manage": 1 00:22:25.014 }, 00:22:25.014 "multi_ctrlr": false, 00:22:25.014 "ana_reporting": false 00:22:25.014 }, 00:22:25.014 "vs": { 00:22:25.014 "nvme_version": "1.4" 00:22:25.014 }, 00:22:25.014 "ns_data": { 00:22:25.014 "id": 1, 00:22:25.014 "can_share": false 00:22:25.014 } 00:22:25.015 } 00:22:25.015 ], 00:22:25.015 "mp_policy": "active_passive" 00:22:25.015 } 00:22:25.015 } 00:22:25.015 ]' 00:22:25.015 05:24:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:25.015 05:24:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:25.015 05:24:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:25.015 05:24:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:25.015 05:24:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:25.015 05:24:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:25.015 05:24:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:25.015 05:24:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:25.015 05:24:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:25.015 05:24:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:25.015 05:24:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:25.275 05:24:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=a72ff7d2-715c-4eaa-b7c3-c0e4fae23ed3 00:22:25.275 05:24:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:25.275 05:24:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a72ff7d2-715c-4eaa-b7c3-c0e4fae23ed3 00:22:25.846 05:24:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:25.846 05:24:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=d3e344e7-d9fb-40f6-91ca-cdd732c760d5 00:22:25.846 05:24:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d3e344e7-d9fb-40f6-91ca-cdd732c760d5 00:22:26.108 05:24:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=225601c5-44a3-4cae-a863-0c05499175fa 00:22:26.108 05:24:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:26.108 05:24:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 225601c5-44a3-4cae-a863-0c05499175fa 00:22:26.108 05:24:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:26.108 05:24:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:26.108 05:24:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=225601c5-44a3-4cae-a863-0c05499175fa 00:22:26.108 05:24:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:26.108 05:24:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 225601c5-44a3-4cae-a863-0c05499175fa 00:22:26.108 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=225601c5-44a3-4cae-a863-0c05499175fa 00:22:26.108 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:26.108 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:26.108 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:26.108 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 225601c5-44a3-4cae-a863-0c05499175fa 00:22:26.369 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:26.369 { 00:22:26.369 "name": "225601c5-44a3-4cae-a863-0c05499175fa", 00:22:26.369 "aliases": [ 00:22:26.369 "lvs/nvme0n1p0" 00:22:26.369 ], 00:22:26.369 "product_name": "Logical Volume", 00:22:26.369 "block_size": 4096, 00:22:26.369 "num_blocks": 26476544, 00:22:26.369 "uuid": "225601c5-44a3-4cae-a863-0c05499175fa", 00:22:26.369 "assigned_rate_limits": { 00:22:26.369 "rw_ios_per_sec": 0, 00:22:26.369 "rw_mbytes_per_sec": 0, 00:22:26.369 "r_mbytes_per_sec": 0, 00:22:26.369 "w_mbytes_per_sec": 0 00:22:26.369 }, 00:22:26.369 "claimed": false, 00:22:26.369 "zoned": false, 00:22:26.369 "supported_io_types": { 00:22:26.369 "read": true, 00:22:26.369 "write": true, 00:22:26.369 "unmap": true, 00:22:26.369 "flush": false, 00:22:26.369 "reset": true, 00:22:26.369 "nvme_admin": false, 00:22:26.369 "nvme_io": false, 00:22:26.369 "nvme_io_md": false, 00:22:26.369 "write_zeroes": true, 00:22:26.369 "zcopy": false, 00:22:26.369 "get_zone_info": false, 00:22:26.369 "zone_management": false, 00:22:26.369 "zone_append": false, 00:22:26.369 "compare": false, 00:22:26.369 "compare_and_write": false, 00:22:26.369 "abort": false, 00:22:26.369 "seek_hole": true, 00:22:26.369 "seek_data": true, 00:22:26.369 "copy": false, 00:22:26.369 "nvme_iov_md": false 00:22:26.369 }, 00:22:26.369 "driver_specific": { 00:22:26.369 "lvol": { 00:22:26.369 "lvol_store_uuid": "d3e344e7-d9fb-40f6-91ca-cdd732c760d5", 00:22:26.369 "base_bdev": "nvme0n1", 00:22:26.369 "thin_provision": true, 00:22:26.369 "num_allocated_clusters": 0, 00:22:26.369 "snapshot": false, 00:22:26.369 "clone": false, 00:22:26.369 "esnap_clone": false 00:22:26.369 } 00:22:26.369 } 00:22:26.369 } 00:22:26.369 ]' 00:22:26.369 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:26.369 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:26.369 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:26.369 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:26.369 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:26.369 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:26.369 05:24:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:26.369 05:24:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:26.369 05:24:19 ftl.ftl_dirty_shutdown -- 
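The lvol chain traced just above: clear_lvols deletes whatever lvstore survived the previous test, a fresh store named lvs is created on nvme0n1, and a 103424 MiB volume is carved out of it with `-t`, so it is thin-provisioned; that is what lets a nominally 101 GiB volume sit on the 5 GiB namespace. create_nv_cache_bdev then starts sizing the write-buffer cache (the base_size=5171 in the trace is evidently 5 % of 103424 MiB, rounded down). The chain as RPCs, with the uuids from this run:

```bash
# The lvstore/lvol provisioning chain from the trace (uuids differ per run).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid' \
    | xargs -r -n1 $rpc bdev_lvol_delete_lvstore -u          # clear_lvols
lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)              # -> d3e344e7-... in this run
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"           # thin 103424 MiB lvol -> 225601c5-...
```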
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:26.629 05:24:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:26.629 05:24:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:26.629 05:24:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 225601c5-44a3-4cae-a863-0c05499175fa 00:22:26.629 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=225601c5-44a3-4cae-a863-0c05499175fa 00:22:26.629 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:26.629 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:26.629 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:26.629 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 225601c5-44a3-4cae-a863-0c05499175fa 00:22:26.888 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:26.888 { 00:22:26.888 "name": "225601c5-44a3-4cae-a863-0c05499175fa", 00:22:26.888 "aliases": [ 00:22:26.888 "lvs/nvme0n1p0" 00:22:26.888 ], 00:22:26.888 "product_name": "Logical Volume", 00:22:26.888 "block_size": 4096, 00:22:26.888 "num_blocks": 26476544, 00:22:26.888 "uuid": "225601c5-44a3-4cae-a863-0c05499175fa", 00:22:26.888 "assigned_rate_limits": { 00:22:26.888 "rw_ios_per_sec": 0, 00:22:26.888 "rw_mbytes_per_sec": 0, 00:22:26.888 "r_mbytes_per_sec": 0, 00:22:26.888 "w_mbytes_per_sec": 0 00:22:26.888 }, 00:22:26.888 "claimed": false, 00:22:26.888 "zoned": false, 00:22:26.888 "supported_io_types": { 00:22:26.888 "read": true, 00:22:26.888 "write": true, 00:22:26.888 "unmap": true, 00:22:26.888 "flush": false, 00:22:26.888 "reset": true, 00:22:26.888 "nvme_admin": false, 00:22:26.888 "nvme_io": false, 00:22:26.888 "nvme_io_md": false, 00:22:26.888 "write_zeroes": true, 00:22:26.888 "zcopy": false, 00:22:26.888 "get_zone_info": false, 00:22:26.888 "zone_management": false, 00:22:26.888 "zone_append": false, 00:22:26.888 "compare": false, 00:22:26.888 "compare_and_write": false, 00:22:26.888 "abort": false, 00:22:26.888 "seek_hole": true, 00:22:26.888 "seek_data": true, 00:22:26.888 "copy": false, 00:22:26.888 "nvme_iov_md": false 00:22:26.888 }, 00:22:26.888 "driver_specific": { 00:22:26.888 "lvol": { 00:22:26.888 "lvol_store_uuid": "d3e344e7-d9fb-40f6-91ca-cdd732c760d5", 00:22:26.888 "base_bdev": "nvme0n1", 00:22:26.888 "thin_provision": true, 00:22:26.888 "num_allocated_clusters": 0, 00:22:26.888 "snapshot": false, 00:22:26.888 "clone": false, 00:22:26.888 "esnap_clone": false 00:22:26.888 } 00:22:26.888 } 00:22:26.888 } 00:22:26.888 ]' 00:22:26.888 05:24:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:26.888 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:26.888 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:26.888 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:26.888 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:26.889 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:26.889 05:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:26.889 05:24:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:27.148 05:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:27.148 05:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 225601c5-44a3-4cae-a863-0c05499175fa 00:22:27.148 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=225601c5-44a3-4cae-a863-0c05499175fa 00:22:27.148 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:27.148 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:27.148 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:27.148 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 225601c5-44a3-4cae-a863-0c05499175fa 00:22:27.409 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:27.409 { 00:22:27.409 "name": "225601c5-44a3-4cae-a863-0c05499175fa", 00:22:27.409 "aliases": [ 00:22:27.409 "lvs/nvme0n1p0" 00:22:27.409 ], 00:22:27.409 "product_name": "Logical Volume", 00:22:27.409 "block_size": 4096, 00:22:27.409 "num_blocks": 26476544, 00:22:27.409 "uuid": "225601c5-44a3-4cae-a863-0c05499175fa", 00:22:27.409 "assigned_rate_limits": { 00:22:27.409 "rw_ios_per_sec": 0, 00:22:27.409 "rw_mbytes_per_sec": 0, 00:22:27.409 "r_mbytes_per_sec": 0, 00:22:27.409 "w_mbytes_per_sec": 0 00:22:27.409 }, 00:22:27.409 "claimed": false, 00:22:27.409 "zoned": false, 00:22:27.409 "supported_io_types": { 00:22:27.409 "read": true, 00:22:27.409 "write": true, 00:22:27.409 "unmap": true, 00:22:27.409 "flush": false, 00:22:27.409 "reset": true, 00:22:27.409 "nvme_admin": false, 00:22:27.409 "nvme_io": false, 00:22:27.409 "nvme_io_md": false, 00:22:27.409 "write_zeroes": true, 00:22:27.409 "zcopy": false, 00:22:27.409 "get_zone_info": false, 00:22:27.409 "zone_management": false, 00:22:27.409 "zone_append": false, 00:22:27.409 "compare": false, 00:22:27.409 "compare_and_write": false, 00:22:27.409 "abort": false, 00:22:27.409 "seek_hole": true, 00:22:27.409 "seek_data": true, 00:22:27.409 "copy": false, 00:22:27.409 "nvme_iov_md": false 00:22:27.409 }, 00:22:27.409 "driver_specific": { 00:22:27.409 "lvol": { 00:22:27.409 "lvol_store_uuid": "d3e344e7-d9fb-40f6-91ca-cdd732c760d5", 00:22:27.409 "base_bdev": "nvme0n1", 00:22:27.409 "thin_provision": true, 00:22:27.409 "num_allocated_clusters": 0, 00:22:27.409 "snapshot": false, 00:22:27.409 "clone": false, 00:22:27.409 "esnap_clone": false 00:22:27.409 } 00:22:27.409 } 00:22:27.409 } 00:22:27.409 ]' 00:22:27.409 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:27.409 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:27.409 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:27.409 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:27.409 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:27.409 05:24:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:27.409 05:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:27.409 05:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 225601c5-44a3-4cae-a863-0c05499175fa 
--l2p_dram_limit 10' 00:22:27.409 05:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:27.409 05:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:27.409 05:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:27.409 05:24:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 225601c5-44a3-4cae-a863-0c05499175fa --l2p_dram_limit 10 -c nvc0n1p0 00:22:27.671 [2024-11-10 05:24:20.702458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.671 [2024-11-10 05:24:20.702498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:27.671 [2024-11-10 05:24:20.702509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:27.671 [2024-11-10 05:24:20.702517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.671 [2024-11-10 05:24:20.702560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.671 [2024-11-10 05:24:20.702569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:27.671 [2024-11-10 05:24:20.702575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:27.671 [2024-11-10 05:24:20.702587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.671 [2024-11-10 05:24:20.702606] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:27.671 [2024-11-10 05:24:20.702821] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:27.671 [2024-11-10 05:24:20.702832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.671 [2024-11-10 05:24:20.702840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:27.671 [2024-11-10 05:24:20.702848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:22:27.671 [2024-11-10 05:24:20.702855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.671 [2024-11-10 05:24:20.702879] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 63757667-739a-4258-a1db-5238ea6e5f8e 00:22:27.671 [2024-11-10 05:24:20.703879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.671 [2024-11-10 05:24:20.703900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:27.671 [2024-11-10 05:24:20.703909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:27.671 [2024-11-10 05:24:20.703915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.671 [2024-11-10 05:24:20.708635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.671 [2024-11-10 05:24:20.708662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:27.671 [2024-11-10 05:24:20.708671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.662 ms 00:22:27.671 [2024-11-10 05:24:20.708677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.671 [2024-11-10 05:24:20.708736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.671 [2024-11-10 05:24:20.708748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:27.671 [2024-11-10 05:24:20.708759] 
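With a 5171 MiB slice of the cache controller split off as nvc0n1p0, everything needed for the FTL device is in place: the thin lvol is the base store, the split is the NV write-buffer cache, and the L2P table is limited to 10 MiB of DRAM. The construction calls as driven above, with the generous 240 s RPC timeout (the NV cache scrub that follows can be slow):

```bash
# FTL device construction as traced above (-d takes the thin lvol's uuid).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_split_create nvc0n1 -s 5171 1       # one 5171 MiB slice -> nvc0n1p0
$rpc -t 240 bdev_ftl_create -b ftl0 \
    -d 225601c5-44a3-4cae-a863-0c05499175fa \
    --l2p_dram_limit 10 -c nvc0n1p0
```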
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:27.671 [2024-11-10 05:24:20.708766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.671 [2024-11-10 05:24:20.708815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.671 [2024-11-10 05:24:20.708823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:27.671 [2024-11-10 05:24:20.708833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:27.671 [2024-11-10 05:24:20.708838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.671 [2024-11-10 05:24:20.708857] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:27.671 [2024-11-10 05:24:20.710109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.671 [2024-11-10 05:24:20.710216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:27.671 [2024-11-10 05:24:20.710230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:22:27.671 [2024-11-10 05:24:20.710238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.671 [2024-11-10 05:24:20.710265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.671 [2024-11-10 05:24:20.710273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:27.671 [2024-11-10 05:24:20.710279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:27.671 [2024-11-10 05:24:20.710287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.671 [2024-11-10 05:24:20.710300] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:27.671 [2024-11-10 05:24:20.710411] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:27.671 [2024-11-10 05:24:20.710420] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:27.671 [2024-11-10 05:24:20.710430] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:27.671 [2024-11-10 05:24:20.710438] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:27.671 [2024-11-10 05:24:20.710446] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:27.671 [2024-11-10 05:24:20.710452] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:27.671 [2024-11-10 05:24:20.710461] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:27.671 [2024-11-10 05:24:20.710466] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:27.671 [2024-11-10 05:24:20.710474] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:27.671 [2024-11-10 05:24:20.710482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.671 [2024-11-10 05:24:20.710489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:27.671 [2024-11-10 05:24:20.710494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:22:27.671 [2024-11-10 05:24:20.710502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.671 [2024-11-10 05:24:20.710566] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.671 [2024-11-10 05:24:20.710575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:27.671 [2024-11-10 05:24:20.710580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:27.671 [2024-11-10 05:24:20.710587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.671 [2024-11-10 05:24:20.710663] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:27.671 [2024-11-10 05:24:20.710673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:27.671 [2024-11-10 05:24:20.710679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:27.671 [2024-11-10 05:24:20.710686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.671 [2024-11-10 05:24:20.710692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:27.671 [2024-11-10 05:24:20.710698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:27.671 [2024-11-10 05:24:20.710703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:27.671 [2024-11-10 05:24:20.710710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:27.671 [2024-11-10 05:24:20.710715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:27.671 [2024-11-10 05:24:20.710721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:27.671 [2024-11-10 05:24:20.710726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:27.671 [2024-11-10 05:24:20.710732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:27.671 [2024-11-10 05:24:20.710738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:27.671 [2024-11-10 05:24:20.710747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:27.671 [2024-11-10 05:24:20.710754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:27.671 [2024-11-10 05:24:20.710761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.671 [2024-11-10 05:24:20.710766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:27.671 [2024-11-10 05:24:20.710772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:27.672 [2024-11-10 05:24:20.710777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.672 [2024-11-10 05:24:20.710783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:27.672 [2024-11-10 05:24:20.710789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:27.672 [2024-11-10 05:24:20.710795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:27.672 [2024-11-10 05:24:20.710801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:27.672 [2024-11-10 05:24:20.710808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:27.672 [2024-11-10 05:24:20.710813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:27.672 [2024-11-10 05:24:20.710820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:27.672 [2024-11-10 05:24:20.710826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:27.672 [2024-11-10 05:24:20.710833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:27.672 [2024-11-10 05:24:20.710839] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:27.672 [2024-11-10 05:24:20.710847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:27.672 [2024-11-10 05:24:20.710853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:27.672 [2024-11-10 05:24:20.710860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:27.672 [2024-11-10 05:24:20.710866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:27.672 [2024-11-10 05:24:20.710873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:27.672 [2024-11-10 05:24:20.710878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:27.672 [2024-11-10 05:24:20.710885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:27.672 [2024-11-10 05:24:20.710891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:27.672 [2024-11-10 05:24:20.710898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:27.672 [2024-11-10 05:24:20.710904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:27.672 [2024-11-10 05:24:20.710912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.672 [2024-11-10 05:24:20.710918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:27.672 [2024-11-10 05:24:20.710925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:27.672 [2024-11-10 05:24:20.710930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.672 [2024-11-10 05:24:20.710937] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:27.672 [2024-11-10 05:24:20.710944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:27.672 [2024-11-10 05:24:20.710953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:27.672 [2024-11-10 05:24:20.710959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.672 [2024-11-10 05:24:20.710968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:27.672 [2024-11-10 05:24:20.710974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:27.672 [2024-11-10 05:24:20.710981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:27.672 [2024-11-10 05:24:20.710987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:27.672 [2024-11-10 05:24:20.711005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:27.672 [2024-11-10 05:24:20.711011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:27.672 [2024-11-10 05:24:20.711020] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:27.672 [2024-11-10 05:24:20.711029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:27.672 [2024-11-10 05:24:20.711037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:27.672 [2024-11-10 05:24:20.711044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:27.672 [2024-11-10 05:24:20.711052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:27.672 [2024-11-10 05:24:20.711058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:27.672 [2024-11-10 05:24:20.711065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:27.672 [2024-11-10 05:24:20.711072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:27.672 [2024-11-10 05:24:20.711080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:27.672 [2024-11-10 05:24:20.711086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:27.672 [2024-11-10 05:24:20.711095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:27.672 [2024-11-10 05:24:20.711101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:27.672 [2024-11-10 05:24:20.711108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:27.672 [2024-11-10 05:24:20.711115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:27.672 [2024-11-10 05:24:20.711122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:27.672 [2024-11-10 05:24:20.711129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:27.672 [2024-11-10 05:24:20.711137] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:27.672 [2024-11-10 05:24:20.711145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:27.672 [2024-11-10 05:24:20.711153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:27.672 [2024-11-10 05:24:20.711160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:27.672 [2024-11-10 05:24:20.711168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:27.672 [2024-11-10 05:24:20.711174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:27.672 [2024-11-10 05:24:20.711181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.672 [2024-11-10 05:24:20.711187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:27.672 [2024-11-10 05:24:20.711196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:22:27.672 [2024-11-10 05:24:20.711203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.672 [2024-11-10 05:24:20.711232] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:27.672 [2024-11-10 05:24:20.711239] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:30.977 [2024-11-10 05:24:23.551308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.551391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:30.977 [2024-11-10 05:24:23.551415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2840.052 ms 00:22:30.977 [2024-11-10 05:24:23.551424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.563662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.563718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:30.977 [2024-11-10 05:24:23.563734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.128 ms 00:22:30.977 [2024-11-10 05:24:23.563742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.563856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.563867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:30.977 [2024-11-10 05:24:23.563882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:30.977 [2024-11-10 05:24:23.563890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.574980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.575043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:30.977 [2024-11-10 05:24:23.575058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.037 ms 00:22:30.977 [2024-11-10 05:24:23.575067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.575099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.575111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:30.977 [2024-11-10 05:24:23.575122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:30.977 [2024-11-10 05:24:23.575130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.575673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.575702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:30.977 [2024-11-10 05:24:23.575715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:22:30.977 [2024-11-10 05:24:23.575724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.575847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.575856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:30.977 [2024-11-10 05:24:23.575877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:22:30.977 [2024-11-10 05:24:23.575889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.591787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.592007] 
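The layout dump above is internally consistent and worth a quick arithmetic pass: 20971520 L2P entries at 4 bytes each is exactly the 80 MiB l2p region, those same entries at 4 KiB per block give an 80 GiB user-visible capacity (the rest of the 103424 MiB base goes to band metadata and the over-provisioning FTL keeps for itself), and with --l2p_dram_limit 10 only a 10 MiB window of the table stays resident, which is the "l2p maximum resident size is: 9 (of 10) MiB" notice just below.

```bash
# Sizing checks against the layout dump above.
echo $(( 20971520 * 4 / 1024 / 1024 ))      # 80     -> "Region l2p ... 80.00 MiB"
echo $(( 20971520 * 4096 / 1024 / 1024 ))   # 81920  -> user-visible capacity, 80 GiB
echo $(( 26476544 * 4096 / 1024 / 1024 ))   # 103424 -> raw "Base device capacity" in MiB
```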
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:30.977 [2024-11-10 05:24:23.592035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.870 ms 00:22:30.977 [2024-11-10 05:24:23.592044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.602141] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:30.977 [2024-11-10 05:24:23.605911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.605960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:30.977 [2024-11-10 05:24:23.605972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.726 ms 00:22:30.977 [2024-11-10 05:24:23.605982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.689274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.689353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:30.977 [2024-11-10 05:24:23.689370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.241 ms 00:22:30.977 [2024-11-10 05:24:23.689385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.689599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.689614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:30.977 [2024-11-10 05:24:23.689623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:22:30.977 [2024-11-10 05:24:23.689633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.696121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.696173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:30.977 [2024-11-10 05:24:23.696185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.467 ms 00:22:30.977 [2024-11-10 05:24:23.696196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.701123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.701176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:30.977 [2024-11-10 05:24:23.701187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.876 ms 00:22:30.977 [2024-11-10 05:24:23.701196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.701537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.701550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:30.977 [2024-11-10 05:24:23.701560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:22:30.977 [2024-11-10 05:24:23.701572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.745682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.745744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:30.977 [2024-11-10 05:24:23.745763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.088 ms 00:22:30.977 [2024-11-10 05:24:23.745775] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.752937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.753017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:30.977 [2024-11-10 05:24:23.753029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.085 ms 00:22:30.977 [2024-11-10 05:24:23.753040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.758782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.758978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:30.977 [2024-11-10 05:24:23.759011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.695 ms 00:22:30.977 [2024-11-10 05:24:23.759021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.765353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.765527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:30.977 [2024-11-10 05:24:23.765640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.229 ms 00:22:30.977 [2024-11-10 05:24:23.765672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.765728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.765757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:30.977 [2024-11-10 05:24:23.765779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:30.977 [2024-11-10 05:24:23.765899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.766008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.977 [2024-11-10 05:24:23.766037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:30.977 [2024-11-10 05:24:23.766059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:30.977 [2024-11-10 05:24:23.766091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.977 [2024-11-10 05:24:23.767269] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3064.276 ms, result 0 00:22:30.977 { 00:22:30.977 "name": "ftl0", 00:22:30.977 "uuid": "63757667-739a-4258-a1db-5238ea6e5f8e" 00:22:30.977 } 00:22:30.977 05:24:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:30.977 05:24:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:30.977 05:24:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:30.977 05:24:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:30.977 05:24:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:31.238 /dev/nbd0 00:22:31.238 05:24:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:31.238 05:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:31.238 05:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:22:31.238 05:24:24 ftl.ftl_dirty_shutdown -- 
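Once 'FTL startup' finishes (3064 ms, most of it the scrub), the bdev configuration is snapshotted with save_subsystem_config and ftl0 is exposed to the kernel as an NBD device so ordinary block tools can drive it; the waitfornbd loop that follows polls /proc/partitions until the node is actually usable. The export itself:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
modprobe nbd
$rpc nbd_start_disk ftl0 /dev/nbd0     # expose the FTL bdev as /dev/nbd0
```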
common/autotest_common.sh@871 -- # (( i = 1 ))
05:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 ))
05:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
05:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break
05:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 ))
05:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 ))
05:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361142 s, 11.3 MB/s
05:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
05:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096
05:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
05:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
05:24:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0
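The xtrace above is the waitfornbd helper from autotest_common.sh polling until /dev/nbd0 is usable. A minimal sketch of that readiness check, reconstructed from the traced commands (the loop bounds, the /proc/partitions probe, and the direct-I/O read are visible above; the polling interval and the /tmp scratch path are assumptions for illustration, the real helper uses the repository test directory):

    # Reconstruction of the readiness check traced above, not the verbatim
    # autotest_common.sh source.
    waitfornbd_sketch() {
        local nbd_name=$1 i
        # Wait for the kernel to publish the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed interval
        done
        # Confirm the device actually serves I/O: one 4 KiB direct read must
        # yield a non-empty file (the '[' 4096 '!=' 0 ']' test in the trace).
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                local size
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1   # assumed interval
        done
        return 1
    }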
00:22:38.801 [2024-11-10 05:24:31.784545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89286 ] 00:22:38.801 [2024-11-10 05:24:31.923132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.801 [2024-11-10 05:24:31.964672] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.182  [2024-11-10T05:24:34.360Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-10T05:24:35.294Z] Copying: 39/1024 [MB] (23 MBps) [2024-11-10T05:24:36.229Z] Copying: 64/1024 [MB] (24 MBps) [2024-11-10T05:24:37.164Z] Copying: 81/1024 [MB] (17 MBps) [2024-11-10T05:24:38.100Z] Copying: 98/1024 [MB] (16 MBps) [2024-11-10T05:24:39.034Z] Copying: 117/1024 [MB] (19 MBps) [2024-11-10T05:24:40.409Z] Copying: 133/1024 [MB] (16 MBps) [2024-11-10T05:24:41.341Z] Copying: 150/1024 [MB] (16 MBps) [2024-11-10T05:24:42.275Z] Copying: 162/1024 [MB] (12 MBps) [2024-11-10T05:24:43.208Z] Copying: 179/1024 [MB] (16 MBps) [2024-11-10T05:24:44.142Z] Copying: 194/1024 [MB] (14 MBps) [2024-11-10T05:24:45.076Z] Copying: 206/1024 [MB] (12 MBps) [2024-11-10T05:24:46.451Z] Copying: 218/1024 [MB] (11 MBps) [2024-11-10T05:24:47.384Z] Copying: 243/1024 [MB] (25 MBps) [2024-11-10T05:24:48.318Z] Copying: 258/1024 [MB] (15 MBps) [2024-11-10T05:24:49.252Z] Copying: 273/1024 [MB] (14 MBps) [2024-11-10T05:24:50.184Z] Copying: 285/1024 [MB] (12 MBps) [2024-11-10T05:24:51.118Z] Copying: 301/1024 [MB] (15 MBps) [2024-11-10T05:24:52.053Z] Copying: 317/1024 [MB] (16 MBps) [2024-11-10T05:24:53.430Z] Copying: 333/1024 [MB] (15 MBps) [2024-11-10T05:24:54.363Z] Copying: 351/1024 [MB] (18 MBps) [2024-11-10T05:24:55.297Z] Copying: 370/1024 [MB] (18 MBps) [2024-11-10T05:24:56.229Z] Copying: 387/1024 [MB] (17 MBps) [2024-11-10T05:24:57.162Z] Copying: 405/1024 [MB] (18 MBps) [2024-11-10T05:24:58.096Z] Copying: 421/1024 [MB] (15 MBps) [2024-11-10T05:24:59.470Z] Copying: 435/1024 [MB] (14 MBps) [2024-11-10T05:25:00.035Z] Copying: 451/1024 [MB] (15 MBps) [2024-11-10T05:25:01.108Z] Copying: 465/1024 [MB] (14 MBps) [2024-11-10T05:25:02.042Z] Copying: 481/1024 [MB] (15 MBps) [2024-11-10T05:25:03.415Z] Copying: 493/1024 [MB] (12 MBps) [2024-11-10T05:25:04.350Z] Copying: 505/1024 [MB] (11 MBps) [2024-11-10T05:25:05.283Z] Copying: 520/1024 [MB] (15 MBps) [2024-11-10T05:25:06.216Z] Copying: 532/1024 [MB] (11 MBps) [2024-11-10T05:25:07.150Z] Copying: 552/1024 [MB] (19 MBps) [2024-11-10T05:25:08.084Z] Copying: 570/1024 [MB] (17 MBps) [2024-11-10T05:25:09.457Z] Copying: 581/1024 [MB] (11 MBps) [2024-11-10T05:25:10.389Z] Copying: 598/1024 [MB] (16 MBps) [2024-11-10T05:25:11.323Z] Copying: 614/1024 [MB] (16 MBps) [2024-11-10T05:25:12.258Z] Copying: 631/1024 [MB] (16 MBps) [2024-11-10T05:25:13.192Z] Copying: 648/1024 [MB] (17 MBps) [2024-11-10T05:25:14.124Z] Copying: 667/1024 [MB] (18 MBps) [2024-11-10T05:25:15.058Z] Copying: 682/1024 [MB] (14 MBps) [2024-11-10T05:25:16.430Z] Copying: 697/1024 [MB] (15 MBps) [2024-11-10T05:25:17.364Z] Copying: 716/1024 [MB] (19 MBps) [2024-11-10T05:25:18.298Z] Copying: 729/1024 [MB] (13 MBps) [2024-11-10T05:25:19.230Z] Copying: 748/1024 [MB] (18 MBps) [2024-11-10T05:25:20.182Z] Copying: 764/1024 [MB] (15 MBps) [2024-11-10T05:25:21.115Z] Copying: 780/1024 [MB] (16 MBps) [2024-11-10T05:25:22.046Z] Copying: 800/1024 [MB] (20 MBps) [2024-11-10T05:25:23.418Z] Copying: 817/1024 [MB] (16 MBps) 
05:25:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0
05:25:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
05:25:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
[2024-11-10 05:25:34.140490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-10 05:25:34.140533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
[2024-11-10 05:25:34.140548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
[2024-11-10 05:25:34.140556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-10 05:25:34.140582] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-10 05:25:34.141043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-10 05:25:34.141068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
[2024-11-10 05:25:34.141084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms
[2024-11-10 05:25:34.141096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-10 05:25:34.143118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-10 05:25:34.143154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
[2024-11-10 05:25:34.143165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.000 ms
[2024-11-10 05:25:34.143174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-10 05:25:34.157279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-10 05:25:34.157315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
[2024-11-10 05:25:34.157325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.089 ms
[2024-11-10 05:25:34.157335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-10 05:25:34.163510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-10 05:25:34.163540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
[2024-11-10 05:25:34.163550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.143 ms
[2024-11-10 05:25:34.163559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
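Each management step in these notices is reported as a name/duration/status triple by trace_step. When skimming a long run like this, the per-step durations can be pulled into a sorted table; a hypothetical one-liner, assuming the console output has been saved to ftl.log with one notice per line as above:

    # Pair every 'name:' trace_step notice with the 'duration:' notice that
    # follows it, then list the slowest management steps first.
    awk '/trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
         /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                   printf "%10.3f ms  %s\n", $0, name }' ftl.log |
        sort -rn | head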
[2024-11-10 05:25:34.165122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-10 05:25:34.165159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
[2024-11-10 05:25:34.165168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.505 ms
[2024-11-10 05:25:34.165176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-10 05:25:34.169326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-10 05:25:34.169361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
[2024-11-10 05:25:34.169373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.118 ms
[2024-11-10 05:25:34.169384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-10 05:25:34.169502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-10 05:25:34.169513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
[2024-11-10 05:25:34.169521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms
[2024-11-10 05:25:34.169531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-10 05:25:34.171341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-10 05:25:34.171376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
[2024-11-10 05:25:34.171385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.795 ms
[2024-11-10 05:25:34.171395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-10 05:25:34.172669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-10 05:25:34.172709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
[2024-11-10 05:25:34.172717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.242 ms
[2024-11-10 05:25:34.172726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-10 05:25:34.174067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-10 05:25:34.174210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
[2024-11-10 05:25:34.174225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms
[2024-11-10 05:25:34.174233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-10 05:25:34.175303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-10 05:25:34.175338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
[2024-11-10 05:25:34.175346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms
[2024-11-10 05:25:34.175355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-10 05:25:34.175384] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-10 05:25:34.175400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[2024-11-10 05:25:34.175409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
[2024-11-10 05:25:34.175419 .. 05:25:34.176264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 through Band 100: 0 / 261120 wr_cnt: 0 state: free (98 identical entries)
[2024-11-10 05:25:34.176282] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-10 05:25:34.176290] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*:
[FTL][ftl0] device UUID: 63757667-739a-4258-a1db-5238ea6e5f8e 00:23:41.101 [2024-11-10 05:25:34.176302] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:41.101 [2024-11-10 05:25:34.176309] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:41.101 [2024-11-10 05:25:34.176318] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:41.101 [2024-11-10 05:25:34.176325] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:41.101 [2024-11-10 05:25:34.176339] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:41.101 [2024-11-10 05:25:34.176346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:41.101 [2024-11-10 05:25:34.176355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:41.101 [2024-11-10 05:25:34.176361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:41.101 [2024-11-10 05:25:34.176369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:41.101 [2024-11-10 05:25:34.176376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.101 [2024-11-10 05:25:34.176385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:41.101 [2024-11-10 05:25:34.176393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:23:41.101 [2024-11-10 05:25:34.176402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.101 [2024-11-10 05:25:34.177818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.101 [2024-11-10 05:25:34.177839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:41.101 [2024-11-10 05:25:34.177848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.400 ms 00:23:41.101 [2024-11-10 05:25:34.177857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.101 [2024-11-10 05:25:34.177942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.101 [2024-11-10 05:25:34.177953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:41.101 [2024-11-10 05:25:34.177961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:41.101 [2024-11-10 05:25:34.177970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.101 [2024-11-10 05:25:34.183606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.101 [2024-11-10 05:25:34.183735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:41.101 [2024-11-10 05:25:34.183789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.101 [2024-11-10 05:25:34.183814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.101 [2024-11-10 05:25:34.183877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.101 [2024-11-10 05:25:34.183905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:41.101 [2024-11-10 05:25:34.183925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.101 [2024-11-10 05:25:34.183944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.101 [2024-11-10 05:25:34.184066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.101 [2024-11-10 05:25:34.184120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:23:41.101 [2024-11-10 05:25:34.184141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.101 [2024-11-10 05:25:34.184161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.101 [2024-11-10 05:25:34.184189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.101 [2024-11-10 05:25:34.184331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:41.101 [2024-11-10 05:25:34.184354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.101 [2024-11-10 05:25:34.184375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.101 [2024-11-10 05:25:34.192608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.101 [2024-11-10 05:25:34.192743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:41.101 [2024-11-10 05:25:34.192791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.101 [2024-11-10 05:25:34.192814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.101 [2024-11-10 05:25:34.200246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.101 [2024-11-10 05:25:34.200366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:41.101 [2024-11-10 05:25:34.200413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.101 [2024-11-10 05:25:34.200437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.101 [2024-11-10 05:25:34.200511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.101 [2024-11-10 05:25:34.200539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:41.101 [2024-11-10 05:25:34.200558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.101 [2024-11-10 05:25:34.200579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.101 [2024-11-10 05:25:34.200625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.101 [2024-11-10 05:25:34.200688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:41.101 [2024-11-10 05:25:34.200710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.101 [2024-11-10 05:25:34.200731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.101 [2024-11-10 05:25:34.200818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.101 [2024-11-10 05:25:34.200889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:41.101 [2024-11-10 05:25:34.200909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.101 [2024-11-10 05:25:34.200929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.101 [2024-11-10 05:25:34.201066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.101 [2024-11-10 05:25:34.201124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:41.101 [2024-11-10 05:25:34.201170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.101 [2024-11-10 05:25:34.201214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.101 [2024-11-10 05:25:34.201268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.101 [2024-11-10 05:25:34.201318] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
[2024-11-10 05:25:34.201340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-11-10 05:25:34.201460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-10 05:25:34.201545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-11-10 05:25:34.201597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
[2024-11-10 05:25:34.201628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-11-10 05:25:34.201649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-10 05:25:34.202109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.583 ms, result 0
true
05:25:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89066
05:25:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89066
05:25:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
[2024-11-10 05:25:34.292261] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
[2024-11-10 05:25:34.292379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89939 ]
[2024-11-10 05:25:34.440614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-10 05:25:34.472901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-10T05:25:36.924Z] Copying: 190/1024 [MB] (190 MBps)
[2024-11-10T05:25:37.866Z] Copying: 401/1024 [MB] (210 MBps)
[2024-11-10T05:25:38.809Z] Copying: 661/1024 [MB] (260 MBps)
[2024-11-10T05:25:39.070Z] Copying: 915/1024 [MB] (254 MBps)
[2024-11-10T05:25:39.331Z] Copying: 1024/1024 [MB] (average 231 MBps)
/home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89066 Killed                  "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
05:25:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-10 05:25:39.167914] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:23:46.095 [2024-11-10 05:25:39.168063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89994 ] 00:23:46.095 [2024-11-10 05:25:39.314421] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.355 [2024-11-10 05:25:39.357144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.355 [2024-11-10 05:25:39.441906] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:46.355 [2024-11-10 05:25:39.441957] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:46.355 [2024-11-10 05:25:39.503853] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:46.356 [2024-11-10 05:25:39.504128] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:46.356 [2024-11-10 05:25:39.504356] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:46.618 [2024-11-10 05:25:39.682617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.618 [2024-11-10 05:25:39.682652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:46.618 [2024-11-10 05:25:39.682662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:46.618 [2024-11-10 05:25:39.682668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.618 [2024-11-10 05:25:39.682703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.618 [2024-11-10 05:25:39.682713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:46.618 [2024-11-10 05:25:39.682721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:46.618 [2024-11-10 05:25:39.682727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.618 [2024-11-10 05:25:39.682742] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:46.618 [2024-11-10 05:25:39.682920] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:46.618 [2024-11-10 05:25:39.682931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.618 [2024-11-10 05:25:39.682936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:46.618 [2024-11-10 05:25:39.682942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:23:46.618 [2024-11-10 05:25:39.682949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.618 [2024-11-10 05:25:39.683886] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:46.618 [2024-11-10 05:25:39.685877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.618 [2024-11-10 05:25:39.685912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:46.618 [2024-11-10 05:25:39.685920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.992 ms 00:23:46.618 [2024-11-10 05:25:39.685926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.618 [2024-11-10 05:25:39.685969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.618 [2024-11-10 05:25:39.685976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:23:46.618 [2024-11-10 05:25:39.685983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:46.618 [2024-11-10 05:25:39.685988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.618 [2024-11-10 05:25:39.690360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.618 [2024-11-10 05:25:39.690517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:46.618 [2024-11-10 05:25:39.690530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.329 ms 00:23:46.618 [2024-11-10 05:25:39.690537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.618 [2024-11-10 05:25:39.690602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.618 [2024-11-10 05:25:39.690609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:46.618 [2024-11-10 05:25:39.690615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:46.618 [2024-11-10 05:25:39.690621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.618 [2024-11-10 05:25:39.690655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.618 [2024-11-10 05:25:39.690665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:46.618 [2024-11-10 05:25:39.690672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:46.618 [2024-11-10 05:25:39.690678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.618 [2024-11-10 05:25:39.690694] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:46.618 [2024-11-10 05:25:39.691830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.618 [2024-11-10 05:25:39.691855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:46.618 [2024-11-10 05:25:39.691862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.140 ms 00:23:46.618 [2024-11-10 05:25:39.691867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.618 [2024-11-10 05:25:39.691892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.618 [2024-11-10 05:25:39.691899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:46.618 [2024-11-10 05:25:39.691908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:46.618 [2024-11-10 05:25:39.691914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.618 [2024-11-10 05:25:39.691928] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:46.618 [2024-11-10 05:25:39.691944] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:46.618 [2024-11-10 05:25:39.691972] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:46.618 [2024-11-10 05:25:39.691985] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:46.618 [2024-11-10 05:25:39.692084] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:46.618 [2024-11-10 05:25:39.692093] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:46.618 
[2024-11-10 05:25:39.692102] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:46.618 [2024-11-10 05:25:39.692109] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:46.618 [2024-11-10 05:25:39.692116] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:46.618 [2024-11-10 05:25:39.692122] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:46.618 [2024-11-10 05:25:39.692127] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:46.618 [2024-11-10 05:25:39.692133] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:46.618 [2024-11-10 05:25:39.692138] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:46.618 [2024-11-10 05:25:39.692144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.618 [2024-11-10 05:25:39.692152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:46.618 [2024-11-10 05:25:39.692157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:23:46.618 [2024-11-10 05:25:39.692162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.618 [2024-11-10 05:25:39.692225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.618 [2024-11-10 05:25:39.692231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:46.618 [2024-11-10 05:25:39.692241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:46.618 [2024-11-10 05:25:39.692250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.618 [2024-11-10 05:25:39.692321] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:46.618 [2024-11-10 05:25:39.692328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:46.618 [2024-11-10 05:25:39.692336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:46.618 [2024-11-10 05:25:39.692342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.618 [2024-11-10 05:25:39.692347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:46.618 [2024-11-10 05:25:39.692352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:46.618 [2024-11-10 05:25:39.692357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:46.618 [2024-11-10 05:25:39.692362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:46.618 [2024-11-10 05:25:39.692367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:46.618 [2024-11-10 05:25:39.692372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:46.618 [2024-11-10 05:25:39.692377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:46.618 [2024-11-10 05:25:39.692382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:46.618 [2024-11-10 05:25:39.692387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:46.618 [2024-11-10 05:25:39.692392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:46.618 [2024-11-10 05:25:39.692401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:46.618 [2024-11-10 05:25:39.692406] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.618 [2024-11-10 05:25:39.692411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:46.618 [2024-11-10 05:25:39.692416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:46.618 [2024-11-10 05:25:39.692421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.618 [2024-11-10 05:25:39.692426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:46.618 [2024-11-10 05:25:39.692431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:46.619 [2024-11-10 05:25:39.692436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:46.619 [2024-11-10 05:25:39.692441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:46.619 [2024-11-10 05:25:39.692446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:46.619 [2024-11-10 05:25:39.692451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:46.619 [2024-11-10 05:25:39.692456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:46.619 [2024-11-10 05:25:39.692461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:46.619 [2024-11-10 05:25:39.692466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:46.619 [2024-11-10 05:25:39.692470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:46.619 [2024-11-10 05:25:39.692475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:46.619 [2024-11-10 05:25:39.692486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:46.619 [2024-11-10 05:25:39.692492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:46.619 [2024-11-10 05:25:39.692498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:46.619 [2024-11-10 05:25:39.692503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:46.619 [2024-11-10 05:25:39.692509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:46.619 [2024-11-10 05:25:39.692515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:46.619 [2024-11-10 05:25:39.692520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:46.619 [2024-11-10 05:25:39.692526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:46.619 [2024-11-10 05:25:39.692532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:46.619 [2024-11-10 05:25:39.692538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.619 [2024-11-10 05:25:39.692544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:46.619 [2024-11-10 05:25:39.692550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:46.619 [2024-11-10 05:25:39.692556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.619 [2024-11-10 05:25:39.692562] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:46.619 [2024-11-10 05:25:39.692568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:46.619 [2024-11-10 05:25:39.692574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:46.619 [2024-11-10 05:25:39.692581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.619 [2024-11-10 
05:25:39.692588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:46.619 [2024-11-10 05:25:39.692593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:46.619 [2024-11-10 05:25:39.692599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:46.619 [2024-11-10 05:25:39.692605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:46.619 [2024-11-10 05:25:39.692610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:46.619 [2024-11-10 05:25:39.692615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:46.619 [2024-11-10 05:25:39.692620] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:46.619 [2024-11-10 05:25:39.692627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:46.619 [2024-11-10 05:25:39.692633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:46.619 [2024-11-10 05:25:39.692639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:46.619 [2024-11-10 05:25:39.692644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:46.619 [2024-11-10 05:25:39.692649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:46.619 [2024-11-10 05:25:39.692654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:46.619 [2024-11-10 05:25:39.692659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:46.619 [2024-11-10 05:25:39.692664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:46.619 [2024-11-10 05:25:39.692670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:46.619 [2024-11-10 05:25:39.692676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:46.619 [2024-11-10 05:25:39.692682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:46.619 [2024-11-10 05:25:39.692687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:46.619 [2024-11-10 05:25:39.692692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:46.619 [2024-11-10 05:25:39.692696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:46.619 [2024-11-10 05:25:39.692702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:46.619 [2024-11-10 05:25:39.692707] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:23:46.619 [2024-11-10 05:25:39.692712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:46.619 [2024-11-10 05:25:39.692718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:46.619 [2024-11-10 05:25:39.692723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:46.619 [2024-11-10 05:25:39.692728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:46.619 [2024-11-10 05:25:39.692736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:46.619 [2024-11-10 05:25:39.692742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.619 [2024-11-10 05:25:39.692749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:46.619 [2024-11-10 05:25:39.692755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:23:46.619 [2024-11-10 05:25:39.692761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.619 [2024-11-10 05:25:39.707767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.619 [2024-11-10 05:25:39.707794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:46.619 [2024-11-10 05:25:39.707803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.975 ms 00:23:46.619 [2024-11-10 05:25:39.707809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.619 [2024-11-10 05:25:39.707882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.619 [2024-11-10 05:25:39.707890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:46.619 [2024-11-10 05:25:39.707898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:46.619 [2024-11-10 05:25:39.707903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.619 [2024-11-10 05:25:39.715562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.619 [2024-11-10 05:25:39.715715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:46.619 [2024-11-10 05:25:39.715731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.614 ms 00:23:46.619 [2024-11-10 05:25:39.715739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.619 [2024-11-10 05:25:39.715772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.619 [2024-11-10 05:25:39.715786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:46.619 [2024-11-10 05:25:39.715795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:46.619 [2024-11-10 05:25:39.715803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.619 [2024-11-10 05:25:39.716158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.619 [2024-11-10 05:25:39.716180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:46.619 [2024-11-10 05:25:39.716190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:23:46.619 [2024-11-10 05:25:39.716200] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.619 [2024-11-10 05:25:39.716327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.619 [2024-11-10 05:25:39.716338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:46.619 [2024-11-10 05:25:39.716350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:23:46.619 [2024-11-10 05:25:39.716358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.619 [2024-11-10 05:25:39.720960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.619 [2024-11-10 05:25:39.721025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:46.619 [2024-11-10 05:25:39.721044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.578 ms 00:23:46.619 [2024-11-10 05:25:39.721053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.619 [2024-11-10 05:25:39.723330] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:46.619 [2024-11-10 05:25:39.723370] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:46.619 [2024-11-10 05:25:39.723385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.619 [2024-11-10 05:25:39.723394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:46.619 [2024-11-10 05:25:39.723406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.235 ms 00:23:46.619 [2024-11-10 05:25:39.723413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.619 [2024-11-10 05:25:39.734576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.619 [2024-11-10 05:25:39.734680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:46.619 [2024-11-10 05:25:39.734693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.125 ms 00:23:46.619 [2024-11-10 05:25:39.734699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.619 [2024-11-10 05:25:39.736263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.619 [2024-11-10 05:25:39.736290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:46.619 [2024-11-10 05:25:39.736297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.538 ms 00:23:46.619 [2024-11-10 05:25:39.736302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.619 [2024-11-10 05:25:39.737510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.619 [2024-11-10 05:25:39.737536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:46.620 [2024-11-10 05:25:39.737543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.181 ms 00:23:46.620 [2024-11-10 05:25:39.737549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.620 [2024-11-10 05:25:39.737799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.620 [2024-11-10 05:25:39.737813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:46.620 [2024-11-10 05:25:39.737823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:23:46.620 [2024-11-10 05:25:39.737830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.620 
[2024-11-10 05:25:39.751806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.620 [2024-11-10 05:25:39.751849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:46.620 [2024-11-10 05:25:39.751861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.963 ms 00:23:46.620 [2024-11-10 05:25:39.751868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.620 [2024-11-10 05:25:39.757617] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:46.620 [2024-11-10 05:25:39.759632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.620 [2024-11-10 05:25:39.759742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:46.620 [2024-11-10 05:25:39.759756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.730 ms 00:23:46.620 [2024-11-10 05:25:39.759763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.620 [2024-11-10 05:25:39.759811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.620 [2024-11-10 05:25:39.759819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:46.620 [2024-11-10 05:25:39.759826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:46.620 [2024-11-10 05:25:39.759832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.620 [2024-11-10 05:25:39.759890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.620 [2024-11-10 05:25:39.759897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:46.620 [2024-11-10 05:25:39.759904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:46.620 [2024-11-10 05:25:39.759914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.620 [2024-11-10 05:25:39.759928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.620 [2024-11-10 05:25:39.759935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:46.620 [2024-11-10 05:25:39.759944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:46.620 [2024-11-10 05:25:39.759951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.620 [2024-11-10 05:25:39.759975] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:46.620 [2024-11-10 05:25:39.759984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.620 [2024-11-10 05:25:39.760027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:46.620 [2024-11-10 05:25:39.760034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:46.620 [2024-11-10 05:25:39.760039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.620 [2024-11-10 05:25:39.762811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.620 [2024-11-10 05:25:39.762913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:46.620 [2024-11-10 05:25:39.762925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.758 ms 00:23:46.620 [2024-11-10 05:25:39.762932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.620 [2024-11-10 05:25:39.762984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.620 [2024-11-10 05:25:39.763007] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:46.620 [2024-11-10 05:25:39.763014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:46.620 [2024-11-10 05:25:39.763019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.620 [2024-11-10 05:25:39.764093] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 81.074 ms, result 0 00:23:47.560  [2024-11-10T05:25:42.182Z] Copying: 37/1024 [MB] (37 MBps) [2024-11-10T05:25:43.127Z] Copying: 72/1024 [MB] (35 MBps) [2024-11-10T05:25:44.070Z] Copying: 95/1024 [MB] (22 MBps) [2024-11-10T05:25:45.014Z] Copying: 129/1024 [MB] (33 MBps) [2024-11-10T05:25:45.958Z] Copying: 154/1024 [MB] (25 MBps) [2024-11-10T05:25:46.902Z] Copying: 173/1024 [MB] (19 MBps) [2024-11-10T05:25:47.846Z] Copying: 187/1024 [MB] (13 MBps) [2024-11-10T05:25:48.790Z] Copying: 208/1024 [MB] (21 MBps) [2024-11-10T05:25:50.180Z] Copying: 223/1024 [MB] (14 MBps) [2024-11-10T05:25:51.124Z] Copying: 242/1024 [MB] (18 MBps) [2024-11-10T05:25:52.067Z] Copying: 255/1024 [MB] (13 MBps) [2024-11-10T05:25:53.012Z] Copying: 268/1024 [MB] (12 MBps) [2024-11-10T05:25:53.956Z] Copying: 278/1024 [MB] (10 MBps) [2024-11-10T05:25:54.901Z] Copying: 295/1024 [MB] (16 MBps) [2024-11-10T05:25:55.844Z] Copying: 309/1024 [MB] (14 MBps) [2024-11-10T05:25:56.787Z] Copying: 323/1024 [MB] (13 MBps) [2024-11-10T05:25:58.212Z] Copying: 344/1024 [MB] (20 MBps) [2024-11-10T05:25:58.785Z] Copying: 363/1024 [MB] (18 MBps) [2024-11-10T05:26:00.173Z] Copying: 373/1024 [MB] (10 MBps) [2024-11-10T05:26:01.117Z] Copying: 384/1024 [MB] (10 MBps) [2024-11-10T05:26:02.059Z] Copying: 395/1024 [MB] (10 MBps) [2024-11-10T05:26:03.002Z] Copying: 405/1024 [MB] (10 MBps) [2024-11-10T05:26:03.944Z] Copying: 416/1024 [MB] (10 MBps) [2024-11-10T05:26:04.888Z] Copying: 426/1024 [MB] (10 MBps) [2024-11-10T05:26:05.831Z] Copying: 437/1024 [MB] (10 MBps) [2024-11-10T05:26:07.217Z] Copying: 447/1024 [MB] (10 MBps) [2024-11-10T05:26:07.790Z] Copying: 458/1024 [MB] (10 MBps) [2024-11-10T05:26:09.177Z] Copying: 469/1024 [MB] (10 MBps) [2024-11-10T05:26:10.128Z] Copying: 480/1024 [MB] (10 MBps) [2024-11-10T05:26:11.073Z] Copying: 491/1024 [MB] (11 MBps) [2024-11-10T05:26:12.016Z] Copying: 501/1024 [MB] (10 MBps) [2024-11-10T05:26:12.959Z] Copying: 512/1024 [MB] (10 MBps) [2024-11-10T05:26:13.906Z] Copying: 526/1024 [MB] (14 MBps) [2024-11-10T05:26:14.888Z] Copying: 548/1024 [MB] (22 MBps) [2024-11-10T05:26:15.830Z] Copying: 559/1024 [MB] (10 MBps) [2024-11-10T05:26:17.216Z] Copying: 577/1024 [MB] (18 MBps) [2024-11-10T05:26:17.788Z] Copying: 588/1024 [MB] (10 MBps) [2024-11-10T05:26:19.174Z] Copying: 616/1024 [MB] (28 MBps) [2024-11-10T05:26:20.116Z] Copying: 651/1024 [MB] (34 MBps) [2024-11-10T05:26:21.058Z] Copying: 661/1024 [MB] (10 MBps) [2024-11-10T05:26:22.002Z] Copying: 672/1024 [MB] (11 MBps) [2024-11-10T05:26:22.947Z] Copying: 687/1024 [MB] (14 MBps) [2024-11-10T05:26:23.891Z] Copying: 701/1024 [MB] (14 MBps) [2024-11-10T05:26:24.835Z] Copying: 711/1024 [MB] (10 MBps) [2024-11-10T05:26:25.779Z] Copying: 727/1024 [MB] (15 MBps) [2024-11-10T05:26:27.169Z] Copying: 738/1024 [MB] (11 MBps) [2024-11-10T05:26:28.111Z] Copying: 756/1024 [MB] (17 MBps) [2024-11-10T05:26:29.056Z] Copying: 767/1024 [MB] (10 MBps) [2024-11-10T05:26:30.000Z] Copying: 777/1024 [MB] (10 MBps) [2024-11-10T05:26:30.940Z] Copying: 791/1024 [MB] (13 MBps) [2024-11-10T05:26:31.922Z] Copying: 812/1024 [MB] (21 
MBps) [2024-11-10T05:26:32.866Z] Copying: 829/1024 [MB] (16 MBps) [2024-11-10T05:26:33.810Z] Copying: 845/1024 [MB] (16 MBps) [2024-11-10T05:26:35.198Z] Copying: 857/1024 [MB] (11 MBps) [2024-11-10T05:26:36.142Z] Copying: 867/1024 [MB] (10 MBps) [2024-11-10T05:26:37.087Z] Copying: 878/1024 [MB] (10 MBps) [2024-11-10T05:26:38.031Z] Copying: 889/1024 [MB] (10 MBps) [2024-11-10T05:26:38.976Z] Copying: 899/1024 [MB] (10 MBps) [2024-11-10T05:26:39.919Z] Copying: 910/1024 [MB] (10 MBps) [2024-11-10T05:26:40.863Z] Copying: 921/1024 [MB] (10 MBps) [2024-11-10T05:26:41.806Z] Copying: 940/1024 [MB] (18 MBps) [2024-11-10T05:26:43.194Z] Copying: 957/1024 [MB] (17 MBps) [2024-11-10T05:26:44.135Z] Copying: 968/1024 [MB] (10 MBps) [2024-11-10T05:26:45.077Z] Copying: 978/1024 [MB] (10 MBps) [2024-11-10T05:26:46.021Z] Copying: 1020/1024 [MB] (41 MBps) [2024-11-10T05:26:46.021Z] Copying: 1048520/1048576 [kB] (3696 kBps) [2024-11-10T05:26:46.021Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-10 05:26:45.858006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.785 [2024-11-10 05:26:45.858081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:52.785 [2024-11-10 05:26:45.858098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:52.785 [2024-11-10 05:26:45.858108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.785 [2024-11-10 05:26:45.860763] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:52.785 [2024-11-10 05:26:45.865071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.785 [2024-11-10 05:26:45.865123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:52.785 [2024-11-10 05:26:45.865135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.110 ms 00:24:52.785 [2024-11-10 05:26:45.865144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.785 [2024-11-10 05:26:45.876000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.785 [2024-11-10 05:26:45.876062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:52.785 [2024-11-10 05:26:45.876079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.052 ms 00:24:52.785 [2024-11-10 05:26:45.876115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.785 [2024-11-10 05:26:45.901052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.785 [2024-11-10 05:26:45.901105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:52.785 [2024-11-10 05:26:45.901127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.916 ms 00:24:52.785 [2024-11-10 05:26:45.901136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.785 [2024-11-10 05:26:45.907313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.785 [2024-11-10 05:26:45.907514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:52.785 [2024-11-10 05:26:45.907535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.137 ms 00:24:52.785 [2024-11-10 05:26:45.907553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.785 [2024-11-10 05:26:45.909927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.785 [2024-11-10 05:26:45.909982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist NV cache metadata 00:24:52.785 [2024-11-10 05:26:45.910005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.311 ms 00:24:52.785 [2024-11-10 05:26:45.910014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.785 [2024-11-10 05:26:45.914658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.785 [2024-11-10 05:26:45.914857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:52.785 [2024-11-10 05:26:45.914877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.598 ms 00:24:52.785 [2024-11-10 05:26:45.914885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.048 [2024-11-10 05:26:46.054185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.048 [2024-11-10 05:26:46.054379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:53.048 [2024-11-10 05:26:46.054451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 139.256 ms 00:24:53.048 [2024-11-10 05:26:46.054476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.048 [2024-11-10 05:26:46.057196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.048 [2024-11-10 05:26:46.057364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:53.048 [2024-11-10 05:26:46.057418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.670 ms 00:24:53.048 [2024-11-10 05:26:46.057440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.048 [2024-11-10 05:26:46.059478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.048 [2024-11-10 05:26:46.059631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:53.048 [2024-11-10 05:26:46.059686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.990 ms 00:24:53.048 [2024-11-10 05:26:46.059707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.048 [2024-11-10 05:26:46.061401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.048 [2024-11-10 05:26:46.061558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:53.048 [2024-11-10 05:26:46.061616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.648 ms 00:24:53.048 [2024-11-10 05:26:46.061637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.048 [2024-11-10 05:26:46.063211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.048 [2024-11-10 05:26:46.063363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:53.048 [2024-11-10 05:26:46.063421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.466 ms 00:24:53.048 [2024-11-10 05:26:46.063446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.048 [2024-11-10 05:26:46.063492] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:53.048 [2024-11-10 05:26:46.063522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 105728 / 261120 wr_cnt: 1 state: open 00:24:53.048 [2024-11-10 05:26:46.063559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:53.048 [2024-11-10 05:26:46.063588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:53.048 
[2024-11-10 05:26:46.063617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:53.048 [2024-11-10 05:26:46.063691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:53.048 [2024-11-10 05:26:46.063721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:53.048 [2024-11-10 05:26:46.063750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:53.048 [2024-11-10 05:26:46.063778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:53.048 [2024-11-10 05:26:46.063807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:53.048 [2024-11-10 05:26:46.063837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:53.048 [2024-11-10 05:26:46.063907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:53.048 [2024-11-10 05:26:46.063937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.063965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: 
free 00:24:53.049 [2024-11-10 05:26:46.064697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.064958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 
261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.065864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:53.049 [2024-11-10 05:26:46.066762] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:53.049 [2024-11-10 05:26:46.066781] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 63757667-739a-4258-a1db-5238ea6e5f8e 00:24:53.050 [2024-11-10 05:26:46.066794] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
total valid LBAs: 105728 00:24:53.050 [2024-11-10 05:26:46.066803] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 106688 00:24:53.050 [2024-11-10 05:26:46.066810] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 105728 00:24:53.050 [2024-11-10 05:26:46.066820] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:24:53.050 [2024-11-10 05:26:46.066843] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:53.050 [2024-11-10 05:26:46.066856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:53.050 [2024-11-10 05:26:46.066864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:53.050 [2024-11-10 05:26:46.066871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:53.050 [2024-11-10 05:26:46.066878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:53.050 [2024-11-10 05:26:46.066893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.050 [2024-11-10 05:26:46.066909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:53.050 [2024-11-10 05:26:46.066919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.402 ms 00:24:53.050 [2024-11-10 05:26:46.066927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.050 [2024-11-10 05:26:46.069191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.050 [2024-11-10 05:26:46.069224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:53.050 [2024-11-10 05:26:46.069237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.237 ms 00:24:53.050 [2024-11-10 05:26:46.069247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.050 [2024-11-10 05:26:46.069362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.050 [2024-11-10 05:26:46.069372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:53.050 [2024-11-10 05:26:46.069388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:24:53.050 [2024-11-10 05:26:46.069396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.050 [2024-11-10 05:26:46.076261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.050 [2024-11-10 05:26:46.076410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:53.050 [2024-11-10 05:26:46.076463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.050 [2024-11-10 05:26:46.076485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.050 [2024-11-10 05:26:46.076560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.050 [2024-11-10 05:26:46.076582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:53.050 [2024-11-10 05:26:46.076607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.050 [2024-11-10 05:26:46.076632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.050 [2024-11-10 05:26:46.076688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.050 [2024-11-10 05:26:46.076769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:53.050 [2024-11-10 05:26:46.076790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.050 
[2024-11-10 05:26:46.076813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.050 [2024-11-10 05:26:46.076839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.050 [2024-11-10 05:26:46.076860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:53.050 [2024-11-10 05:26:46.076879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.050 [2024-11-10 05:26:46.077099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.050 [2024-11-10 05:26:46.089704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.050 [2024-11-10 05:26:46.089880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:53.050 [2024-11-10 05:26:46.089897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.050 [2024-11-10 05:26:46.089907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.050 [2024-11-10 05:26:46.099458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.050 [2024-11-10 05:26:46.099606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:53.050 [2024-11-10 05:26:46.099629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.050 [2024-11-10 05:26:46.099638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.050 [2024-11-10 05:26:46.099714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.050 [2024-11-10 05:26:46.099725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:53.050 [2024-11-10 05:26:46.099733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.050 [2024-11-10 05:26:46.099741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.050 [2024-11-10 05:26:46.099774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.050 [2024-11-10 05:26:46.099784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:53.050 [2024-11-10 05:26:46.099792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.050 [2024-11-10 05:26:46.099800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.050 [2024-11-10 05:26:46.099874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.050 [2024-11-10 05:26:46.099888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:53.050 [2024-11-10 05:26:46.099896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.050 [2024-11-10 05:26:46.099904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.050 [2024-11-10 05:26:46.099932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.050 [2024-11-10 05:26:46.099942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:53.050 [2024-11-10 05:26:46.099950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.050 [2024-11-10 05:26:46.099958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.050 [2024-11-10 05:26:46.100272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.050 [2024-11-10 05:26:46.100307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:53.050 [2024-11-10 05:26:46.100390] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.050 [2024-11-10 05:26:46.100413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.050 [2024-11-10 05:26:46.100494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.050 [2024-11-10 05:26:46.100644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:53.050 [2024-11-10 05:26:46.100657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.050 [2024-11-10 05:26:46.100666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.050 [2024-11-10 05:26:46.100811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 244.588 ms, result 0 00:24:53.993 00:24:53.993 00:24:53.993 05:26:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:55.400 05:26:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:55.400 [2024-11-10 05:26:48.610487] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:55.400 [2024-11-10 05:26:48.610593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90700 ] 00:24:55.678 [2024-11-10 05:26:48.752569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.678 [2024-11-10 05:26:48.790376] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.678 [2024-11-10 05:26:48.890756] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:55.678 [2024-11-10 05:26:48.890830] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:55.940 [2024-11-10 05:26:49.052195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.940 [2024-11-10 05:26:49.052261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:55.940 [2024-11-10 05:26:49.052280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:55.940 [2024-11-10 05:26:49.052292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.940 [2024-11-10 05:26:49.052351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.940 [2024-11-10 05:26:49.052363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:55.940 [2024-11-10 05:26:49.052376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:55.940 [2024-11-10 05:26:49.052384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.940 [2024-11-10 05:26:49.052405] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:55.940 [2024-11-10 05:26:49.052696] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:55.940 [2024-11-10 05:26:49.052713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.940 [2024-11-10 05:26:49.052725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:55.940 [2024-11-10 
05:26:49.052734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:24:55.940 [2024-11-10 05:26:49.052745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.940 [2024-11-10 05:26:49.054548] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:55.940 [2024-11-10 05:26:49.058851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.940 [2024-11-10 05:26:49.058909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:55.940 [2024-11-10 05:26:49.058920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.314 ms 00:24:55.940 [2024-11-10 05:26:49.058929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.940 [2024-11-10 05:26:49.059050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.940 [2024-11-10 05:26:49.059061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:55.940 [2024-11-10 05:26:49.059070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:55.940 [2024-11-10 05:26:49.059078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.940 [2024-11-10 05:26:49.067538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.940 [2024-11-10 05:26:49.067587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:55.940 [2024-11-10 05:26:49.067599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.396 ms 00:24:55.940 [2024-11-10 05:26:49.067611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.940 [2024-11-10 05:26:49.067713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.940 [2024-11-10 05:26:49.067723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:55.940 [2024-11-10 05:26:49.067732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:55.940 [2024-11-10 05:26:49.067740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.940 [2024-11-10 05:26:49.067796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.940 [2024-11-10 05:26:49.067807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:55.940 [2024-11-10 05:26:49.067816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:55.940 [2024-11-10 05:26:49.067824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.940 [2024-11-10 05:26:49.067851] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:55.940 [2024-11-10 05:26:49.069910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.940 [2024-11-10 05:26:49.070104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:55.940 [2024-11-10 05:26:49.070122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.068 ms 00:24:55.940 [2024-11-10 05:26:49.070139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.940 [2024-11-10 05:26:49.070180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.940 [2024-11-10 05:26:49.070188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:55.940 [2024-11-10 05:26:49.070196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:55.940 [2024-11-10 
05:26:49.070203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.940 [2024-11-10 05:26:49.070234] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:55.940 [2024-11-10 05:26:49.070258] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:55.940 [2024-11-10 05:26:49.070296] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:55.940 [2024-11-10 05:26:49.070316] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:55.940 [2024-11-10 05:26:49.070421] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:55.940 [2024-11-10 05:26:49.070432] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:55.940 [2024-11-10 05:26:49.070443] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:55.940 [2024-11-10 05:26:49.070456] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:55.940 [2024-11-10 05:26:49.070467] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:55.940 [2024-11-10 05:26:49.070475] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:55.941 [2024-11-10 05:26:49.070483] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:55.941 [2024-11-10 05:26:49.070490] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:55.941 [2024-11-10 05:26:49.070502] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:55.941 [2024-11-10 05:26:49.070510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.941 [2024-11-10 05:26:49.070518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:55.941 [2024-11-10 05:26:49.070525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:24:55.941 [2024-11-10 05:26:49.070532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.941 [2024-11-10 05:26:49.070617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.941 [2024-11-10 05:26:49.070629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:55.941 [2024-11-10 05:26:49.070638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:55.941 [2024-11-10 05:26:49.070646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.941 [2024-11-10 05:26:49.070748] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:55.941 [2024-11-10 05:26:49.070765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:55.941 [2024-11-10 05:26:49.070775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:55.941 [2024-11-10 05:26:49.070790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:55.941 [2024-11-10 05:26:49.070798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:55.941 [2024-11-10 05:26:49.070807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:55.941 [2024-11-10 05:26:49.070815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 80.00 MiB 00:24:55.941 [2024-11-10 05:26:49.070823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:55.941 [2024-11-10 05:26:49.070832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:55.941 [2024-11-10 05:26:49.070840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:55.941 [2024-11-10 05:26:49.070848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:55.941 [2024-11-10 05:26:49.070856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:55.941 [2024-11-10 05:26:49.070864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:55.941 [2024-11-10 05:26:49.070872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:55.941 [2024-11-10 05:26:49.070881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:55.941 [2024-11-10 05:26:49.070889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:55.941 [2024-11-10 05:26:49.070897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:55.941 [2024-11-10 05:26:49.070907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:55.941 [2024-11-10 05:26:49.070917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:55.941 [2024-11-10 05:26:49.070925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:55.941 [2024-11-10 05:26:49.070933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:55.941 [2024-11-10 05:26:49.070941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:55.941 [2024-11-10 05:26:49.070949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:55.941 [2024-11-10 05:26:49.070957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:55.941 [2024-11-10 05:26:49.070964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:55.941 [2024-11-10 05:26:49.070972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:55.941 [2024-11-10 05:26:49.070980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:55.941 [2024-11-10 05:26:49.070988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:55.941 [2024-11-10 05:26:49.071012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:55.941 [2024-11-10 05:26:49.071019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:55.941 [2024-11-10 05:26:49.071026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:55.941 [2024-11-10 05:26:49.071032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:55.941 [2024-11-10 05:26:49.071039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:55.941 [2024-11-10 05:26:49.071050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:55.941 [2024-11-10 05:26:49.071057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:55.941 [2024-11-10 05:26:49.071064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:55.941 [2024-11-10 05:26:49.071070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:55.941 [2024-11-10 05:26:49.071077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:55.941 [2024-11-10 05:26:49.071084] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:55.941 [2024-11-10 05:26:49.071090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:55.941 [2024-11-10 05:26:49.071097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:55.941 [2024-11-10 05:26:49.071105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:55.941 [2024-11-10 05:26:49.071111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:55.941 [2024-11-10 05:26:49.071118] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:55.941 [2024-11-10 05:26:49.071128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:55.941 [2024-11-10 05:26:49.071149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:55.941 [2024-11-10 05:26:49.071161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:55.941 [2024-11-10 05:26:49.071169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:55.941 [2024-11-10 05:26:49.071176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:55.941 [2024-11-10 05:26:49.071186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:55.941 [2024-11-10 05:26:49.071193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:55.941 [2024-11-10 05:26:49.071199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:55.941 [2024-11-10 05:26:49.071206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:55.941 [2024-11-10 05:26:49.071214] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:55.941 [2024-11-10 05:26:49.071224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:55.941 [2024-11-10 05:26:49.071233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:55.941 [2024-11-10 05:26:49.071241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:55.941 [2024-11-10 05:26:49.071248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:55.941 [2024-11-10 05:26:49.071255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:55.941 [2024-11-10 05:26:49.071262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:55.941 [2024-11-10 05:26:49.071269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:55.941 [2024-11-10 05:26:49.071276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:55.941 [2024-11-10 05:26:49.071283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:55.941 [2024-11-10 05:26:49.071290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:55.941 [2024-11-10 
05:26:49.071297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:55.941 [2024-11-10 05:26:49.071306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:55.941 [2024-11-10 05:26:49.071313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:55.941 [2024-11-10 05:26:49.071320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:55.941 [2024-11-10 05:26:49.071327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:55.941 [2024-11-10 05:26:49.071335] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:55.941 [2024-11-10 05:26:49.071344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:55.941 [2024-11-10 05:26:49.071352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:55.941 [2024-11-10 05:26:49.071359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:55.941 [2024-11-10 05:26:49.071367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:55.941 [2024-11-10 05:26:49.071374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:55.941 [2024-11-10 05:26:49.071381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.941 [2024-11-10 05:26:49.071389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:55.941 [2024-11-10 05:26:49.071396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:24:55.941 [2024-11-10 05:26:49.071407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.941 [2024-11-10 05:26:49.092749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.941 [2024-11-10 05:26:49.092818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:55.941 [2024-11-10 05:26:49.092838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.293 ms 00:24:55.941 [2024-11-10 05:26:49.092849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.941 [2024-11-10 05:26:49.092977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.941 [2024-11-10 05:26:49.093018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:55.941 [2024-11-10 05:26:49.093031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:24:55.941 [2024-11-10 05:26:49.093051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.942 [2024-11-10 05:26:49.105564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.942 [2024-11-10 05:26:49.105611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:55.942 [2024-11-10 05:26:49.105623] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 12.428 ms 00:24:55.942 [2024-11-10 05:26:49.105635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.942 [2024-11-10 05:26:49.105670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.942 [2024-11-10 05:26:49.105679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:55.942 [2024-11-10 05:26:49.105687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:55.942 [2024-11-10 05:26:49.105695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.942 [2024-11-10 05:26:49.106219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.942 [2024-11-10 05:26:49.106253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:55.942 [2024-11-10 05:26:49.106264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:24:55.942 [2024-11-10 05:26:49.106278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.942 [2024-11-10 05:26:49.106427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.942 [2024-11-10 05:26:49.106441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:55.942 [2024-11-10 05:26:49.106450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:24:55.942 [2024-11-10 05:26:49.106462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.942 [2024-11-10 05:26:49.112829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.942 [2024-11-10 05:26:49.112874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:55.942 [2024-11-10 05:26:49.112895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.342 ms 00:24:55.942 [2024-11-10 05:26:49.112904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.942 [2024-11-10 05:26:49.116807] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:55.942 [2024-11-10 05:26:49.116857] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:55.942 [2024-11-10 05:26:49.116869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.942 [2024-11-10 05:26:49.116878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:55.942 [2024-11-10 05:26:49.116887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.849 ms 00:24:55.942 [2024-11-10 05:26:49.116894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.942 [2024-11-10 05:26:49.132587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.942 [2024-11-10 05:26:49.132630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:55.942 [2024-11-10 05:26:49.132650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.644 ms 00:24:55.942 [2024-11-10 05:26:49.132658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.942 [2024-11-10 05:26:49.135507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.942 [2024-11-10 05:26:49.135674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:55.942 [2024-11-10 05:26:49.135692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.791 ms 00:24:55.942 [2024-11-10 
05:26:49.135699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.942 [2024-11-10 05:26:49.138215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.942 [2024-11-10 05:26:49.138250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:55.942 [2024-11-10 05:26:49.138259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.478 ms 00:24:55.942 [2024-11-10 05:26:49.138267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.942 [2024-11-10 05:26:49.138626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.942 [2024-11-10 05:26:49.138645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:55.942 [2024-11-10 05:26:49.138658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:24:55.942 [2024-11-10 05:26:49.138666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.942 [2024-11-10 05:26:49.161174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.942 [2024-11-10 05:26:49.161239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:55.942 [2024-11-10 05:26:49.161253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.483 ms 00:24:55.942 [2024-11-10 05:26:49.161261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.942 [2024-11-10 05:26:49.169204] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:56.202 [2024-11-10 05:26:49.172345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.202 [2024-11-10 05:26:49.172386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:56.202 [2024-11-10 05:26:49.172405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.034 ms 00:24:56.202 [2024-11-10 05:26:49.172414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.202 [2024-11-10 05:26:49.172498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.203 [2024-11-10 05:26:49.172513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:56.203 [2024-11-10 05:26:49.172522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:56.203 [2024-11-10 05:26:49.172530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.203 [2024-11-10 05:26:49.174238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.203 [2024-11-10 05:26:49.174397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:56.203 [2024-11-10 05:26:49.174416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:24:56.203 [2024-11-10 05:26:49.174431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.203 [2024-11-10 05:26:49.174464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.203 [2024-11-10 05:26:49.174476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:56.203 [2024-11-10 05:26:49.174485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:56.203 [2024-11-10 05:26:49.174492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.203 [2024-11-10 05:26:49.174531] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:56.203 [2024-11-10 05:26:49.174541] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.203 [2024-11-10 05:26:49.174549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:56.203 [2024-11-10 05:26:49.174557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:56.203 [2024-11-10 05:26:49.174564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.203 [2024-11-10 05:26:49.179501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.203 [2024-11-10 05:26:49.179557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:56.203 [2024-11-10 05:26:49.179567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.916 ms 00:24:56.203 [2024-11-10 05:26:49.179579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.203 [2024-11-10 05:26:49.179659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.203 [2024-11-10 05:26:49.179670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:56.203 [2024-11-10 05:26:49.179678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:56.203 [2024-11-10 05:26:49.179686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.203 [2024-11-10 05:26:49.180788] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.143 ms, result 0 00:24:57.144  [2024-11-10T05:26:51.767Z] Copying: 996/1048576 [kB] (996 kBps) [2024-11-10T05:26:52.711Z] Copying: 4384/1048576 [kB] (3388 kBps) [2024-11-10T05:26:53.654Z] Copying: 20/1024 [MB] (15 MBps) [2024-11-10T05:26:54.598Z] Copying: 50/1024 [MB] (30 MBps) [2024-11-10T05:26:55.541Z] Copying: 79/1024 [MB] (29 MBps) [2024-11-10T05:26:56.484Z] Copying: 102/1024 [MB] (22 MBps) [2024-11-10T05:26:57.426Z] Copying: 140/1024 [MB] (38 MBps) [2024-11-10T05:26:58.369Z] Copying: 161/1024 [MB] (20 MBps) [2024-11-10T05:26:59.757Z] Copying: 185/1024 [MB] (24 MBps) [2024-11-10T05:27:00.698Z] Copying: 212/1024 [MB] (27 MBps) [2024-11-10T05:27:01.640Z] Copying: 243/1024 [MB] (30 MBps) [2024-11-10T05:27:02.584Z] Copying: 280/1024 [MB] (37 MBps) [2024-11-10T05:27:03.524Z] Copying: 312/1024 [MB] (31 MBps) [2024-11-10T05:27:04.468Z] Copying: 351/1024 [MB] (38 MBps) [2024-11-10T05:27:05.411Z] Copying: 382/1024 [MB] (31 MBps) [2024-11-10T05:27:06.404Z] Copying: 404/1024 [MB] (21 MBps) [2024-11-10T05:27:07.790Z] Copying: 435/1024 [MB] (30 MBps) [2024-11-10T05:27:08.362Z] Copying: 464/1024 [MB] (28 MBps) [2024-11-10T05:27:09.748Z] Copying: 502/1024 [MB] (38 MBps) [2024-11-10T05:27:10.691Z] Copying: 532/1024 [MB] (30 MBps) [2024-11-10T05:27:11.635Z] Copying: 559/1024 [MB] (27 MBps) [2024-11-10T05:27:12.579Z] Copying: 591/1024 [MB] (31 MBps) [2024-11-10T05:27:13.521Z] Copying: 621/1024 [MB] (30 MBps) [2024-11-10T05:27:14.465Z] Copying: 649/1024 [MB] (28 MBps) [2024-11-10T05:27:15.407Z] Copying: 680/1024 [MB] (30 MBps) [2024-11-10T05:27:16.793Z] Copying: 704/1024 [MB] (24 MBps) [2024-11-10T05:27:17.365Z] Copying: 731/1024 [MB] (27 MBps) [2024-11-10T05:27:18.749Z] Copying: 756/1024 [MB] (24 MBps) [2024-11-10T05:27:19.693Z] Copying: 780/1024 [MB] (23 MBps) [2024-11-10T05:27:20.635Z] Copying: 799/1024 [MB] (19 MBps) [2024-11-10T05:27:21.577Z] Copying: 825/1024 [MB] (26 MBps) [2024-11-10T05:27:22.540Z] Copying: 858/1024 [MB] (33 MBps) [2024-11-10T05:27:23.491Z] Copying: 882/1024 [MB] (23 MBps) [2024-11-10T05:27:24.435Z] Copying: 905/1024 [MB] (23 MBps) 
[2024-11-10T05:27:25.377Z] Copying: 927/1024 [MB] (21 MBps) [2024-11-10T05:27:26.765Z] Copying: 957/1024 [MB] (29 MBps) [2024-11-10T05:27:27.708Z] Copying: 987/1024 [MB] (30 MBps) [2024-11-10T05:27:27.708Z] Copying: 1016/1024 [MB] (29 MBps) [2024-11-10T05:27:27.970Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-10 05:27:27.726869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.735 [2024-11-10 05:27:27.727046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:34.735 [2024-11-10 05:27:27.727065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:34.735 [2024-11-10 05:27:27.727076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.735 [2024-11-10 05:27:27.727104] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:34.735 [2024-11-10 05:27:27.727980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.735 [2024-11-10 05:27:27.728028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:34.735 [2024-11-10 05:27:27.728051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:25:34.735 [2024-11-10 05:27:27.728061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.735 [2024-11-10 05:27:27.728367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.735 [2024-11-10 05:27:27.728380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:34.735 [2024-11-10 05:27:27.728390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:25:34.735 [2024-11-10 05:27:27.728399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.735 [2024-11-10 05:27:27.742748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.735 [2024-11-10 05:27:27.742806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:34.735 [2024-11-10 05:27:27.742820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.323 ms 00:25:34.735 [2024-11-10 05:27:27.742837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.735 [2024-11-10 05:27:27.749095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.735 [2024-11-10 05:27:27.749148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:34.735 [2024-11-10 05:27:27.749160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.221 ms 00:25:34.735 [2024-11-10 05:27:27.749169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.735 [2024-11-10 05:27:27.751939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.735 [2024-11-10 05:27:27.752001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:34.735 [2024-11-10 05:27:27.752013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.704 ms 00:25:34.735 [2024-11-10 05:27:27.752021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.735 [2024-11-10 05:27:27.756855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.735 [2024-11-10 05:27:27.756930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:34.735 [2024-11-10 05:27:27.756942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.792 ms 00:25:34.735 [2024-11-10 05:27:27.756960] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.735 [2024-11-10 05:27:27.761388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.735 [2024-11-10 05:27:27.761437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:34.735 [2024-11-10 05:27:27.761449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.367 ms 00:25:34.735 [2024-11-10 05:27:27.761457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.735 [2024-11-10 05:27:27.764734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.735 [2024-11-10 05:27:27.764784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:34.735 [2024-11-10 05:27:27.764795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.260 ms 00:25:34.735 [2024-11-10 05:27:27.764802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.735 [2024-11-10 05:27:27.767567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.735 [2024-11-10 05:27:27.767629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:34.735 [2024-11-10 05:27:27.767639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.723 ms 00:25:34.735 [2024-11-10 05:27:27.767646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.735 [2024-11-10 05:27:27.769890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.735 [2024-11-10 05:27:27.769937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:34.735 [2024-11-10 05:27:27.769947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.203 ms 00:25:34.735 [2024-11-10 05:27:27.769955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.735 [2024-11-10 05:27:27.772155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.735 [2024-11-10 05:27:27.772200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:34.735 [2024-11-10 05:27:27.772210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.116 ms 00:25:34.735 [2024-11-10 05:27:27.772216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.735 [2024-11-10 05:27:27.772255] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:34.735 [2024-11-10 05:27:27.772270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:34.735 [2024-11-10 05:27:27.772281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:34.735 [2024-11-10 05:27:27.772290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 
wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:34.735 [2024-11-10 05:27:27.772683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772786] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.772984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773044] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:34.736 [2024-11-10 05:27:27.773217] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:34.736 [2024-11-10 05:27:27.773231] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 63757667-739a-4258-a1db-5238ea6e5f8e 00:25:34.736 [2024-11-10 05:27:27.773246] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:34.736 [2024-11-10 05:27:27.773276] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 158912 00:25:34.736 [2024-11-10 05:27:27.773284] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 156928 00:25:34.736 [2024-11-10 05:27:27.773293] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0126 00:25:34.736 [2024-11-10 05:27:27.773302] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:34.736 [2024-11-10 05:27:27.773310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:34.736 [2024-11-10 05:27:27.773319] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] high: 0 00:25:34.736 [2024-11-10 05:27:27.773326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:34.736 [2024-11-10 05:27:27.773333] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:34.736 [2024-11-10 05:27:27.773342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.736 [2024-11-10 05:27:27.773356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:34.736 [2024-11-10 05:27:27.773364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:25:34.736 [2024-11-10 05:27:27.773371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.736 [2024-11-10 05:27:27.775718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.736 [2024-11-10 05:27:27.775767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:34.736 [2024-11-10 05:27:27.775778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.324 ms 00:25:34.736 [2024-11-10 05:27:27.775786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.736 [2024-11-10 05:27:27.775899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.736 [2024-11-10 05:27:27.775913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:34.736 [2024-11-10 05:27:27.775922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:25:34.736 [2024-11-10 05:27:27.775932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.736 [2024-11-10 05:27:27.782651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.736 [2024-11-10 05:27:27.782701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:34.736 [2024-11-10 05:27:27.782712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.736 [2024-11-10 05:27:27.782721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.736 [2024-11-10 05:27:27.782780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.736 [2024-11-10 05:27:27.782788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:34.736 [2024-11-10 05:27:27.782796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.736 [2024-11-10 05:27:27.782810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.736 [2024-11-10 05:27:27.782878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.736 [2024-11-10 05:27:27.782888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:34.736 [2024-11-10 05:27:27.782897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.736 [2024-11-10 05:27:27.782904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.736 [2024-11-10 05:27:27.782919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.736 [2024-11-10 05:27:27.782928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:34.736 [2024-11-10 05:27:27.782936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.736 [2024-11-10 05:27:27.782944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.736 [2024-11-10 05:27:27.796698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.736 [2024-11-10 
05:27:27.796754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:34.736 [2024-11-10 05:27:27.796765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.736 [2024-11-10 05:27:27.796775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.736 [2024-11-10 05:27:27.808020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.736 [2024-11-10 05:27:27.808090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:34.736 [2024-11-10 05:27:27.808102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.736 [2024-11-10 05:27:27.808112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.737 [2024-11-10 05:27:27.808176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.737 [2024-11-10 05:27:27.808186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:34.737 [2024-11-10 05:27:27.808195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.737 [2024-11-10 05:27:27.808204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.737 [2024-11-10 05:27:27.808247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.737 [2024-11-10 05:27:27.808256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:34.737 [2024-11-10 05:27:27.808265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.737 [2024-11-10 05:27:27.808273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.737 [2024-11-10 05:27:27.808344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.737 [2024-11-10 05:27:27.808358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:34.737 [2024-11-10 05:27:27.808372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.737 [2024-11-10 05:27:27.808380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.737 [2024-11-10 05:27:27.808408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.737 [2024-11-10 05:27:27.808418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:34.737 [2024-11-10 05:27:27.808427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.737 [2024-11-10 05:27:27.808435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.737 [2024-11-10 05:27:27.808476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.737 [2024-11-10 05:27:27.808497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:34.737 [2024-11-10 05:27:27.808506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.737 [2024-11-10 05:27:27.808515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.737 [2024-11-10 05:27:27.808559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.737 [2024-11-10 05:27:27.808577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:34.737 [2024-11-10 05:27:27.808591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.737 [2024-11-10 05:27:27.808600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.737 [2024-11-10 05:27:27.808736] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 81.837 ms, result 0 00:25:34.997 00:25:34.997 00:25:34.997 05:27:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:37.543 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:37.543 05:27:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:37.543 [2024-11-10 05:27:30.349867] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:37.543 [2024-11-10 05:27:30.350041] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91131 ] 00:25:37.543 [2024-11-10 05:27:30.501481] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.543 [2024-11-10 05:27:30.552573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.543 [2024-11-10 05:27:30.668614] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:37.543 [2024-11-10 05:27:30.668704] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:37.807 [2024-11-10 05:27:30.829280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.807 [2024-11-10 05:27:30.829339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:37.807 [2024-11-10 05:27:30.829356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:37.807 [2024-11-10 05:27:30.829365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.807 [2024-11-10 05:27:30.829422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.807 [2024-11-10 05:27:30.829433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:37.807 [2024-11-10 05:27:30.829442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:37.807 [2024-11-10 05:27:30.829450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.807 [2024-11-10 05:27:30.829474] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:37.807 [2024-11-10 05:27:30.829758] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:37.807 [2024-11-10 05:27:30.829788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.807 [2024-11-10 05:27:30.829800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:37.807 [2024-11-10 05:27:30.829810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:25:37.807 [2024-11-10 05:27:30.829820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.807 [2024-11-10 05:27:30.831475] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:37.807 [2024-11-10 05:27:30.835174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.807 [2024-11-10 05:27:30.835233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:37.807 [2024-11-10 
05:27:30.835245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.702 ms 00:25:37.807 [2024-11-10 05:27:30.835253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.807 [2024-11-10 05:27:30.835330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.807 [2024-11-10 05:27:30.835347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:37.807 [2024-11-10 05:27:30.835356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:37.807 [2024-11-10 05:27:30.835368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.807 [2024-11-10 05:27:30.843452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.807 [2024-11-10 05:27:30.843497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:37.807 [2024-11-10 05:27:30.843508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.041 ms 00:25:37.807 [2024-11-10 05:27:30.843522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.807 [2024-11-10 05:27:30.843624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.807 [2024-11-10 05:27:30.843634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:37.807 [2024-11-10 05:27:30.843644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:37.807 [2024-11-10 05:27:30.843654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.807 [2024-11-10 05:27:30.843709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.807 [2024-11-10 05:27:30.843720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:37.807 [2024-11-10 05:27:30.843729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:37.807 [2024-11-10 05:27:30.843736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.807 [2024-11-10 05:27:30.843761] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:37.807 [2024-11-10 05:27:30.845830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.807 [2024-11-10 05:27:30.845869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:37.807 [2024-11-10 05:27:30.845879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.077 ms 00:25:37.807 [2024-11-10 05:27:30.845887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.807 [2024-11-10 05:27:30.845922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.807 [2024-11-10 05:27:30.845937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:37.807 [2024-11-10 05:27:30.845945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:37.807 [2024-11-10 05:27:30.845952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.807 [2024-11-10 05:27:30.845981] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:37.807 [2024-11-10 05:27:30.846023] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:37.807 [2024-11-10 05:27:30.846060] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:37.808 [2024-11-10 05:27:30.846076] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:37.808 [2024-11-10 05:27:30.846185] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:37.808 [2024-11-10 05:27:30.846196] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:37.808 [2024-11-10 05:27:30.846208] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:37.808 [2024-11-10 05:27:30.846218] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:37.808 [2024-11-10 05:27:30.846230] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:37.808 [2024-11-10 05:27:30.846238] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:37.808 [2024-11-10 05:27:30.846247] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:37.808 [2024-11-10 05:27:30.846254] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:37.808 [2024-11-10 05:27:30.846261] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:37.808 [2024-11-10 05:27:30.846269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.808 [2024-11-10 05:27:30.846277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:37.808 [2024-11-10 05:27:30.846286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:25:37.808 [2024-11-10 05:27:30.846297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.808 [2024-11-10 05:27:30.846379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.808 [2024-11-10 05:27:30.846390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:37.808 [2024-11-10 05:27:30.846398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:37.808 [2024-11-10 05:27:30.846405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.808 [2024-11-10 05:27:30.846502] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:37.808 [2024-11-10 05:27:30.846514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:37.808 [2024-11-10 05:27:30.846524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:37.808 [2024-11-10 05:27:30.846539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.808 [2024-11-10 05:27:30.846548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:37.808 [2024-11-10 05:27:30.846556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:37.808 [2024-11-10 05:27:30.846564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:37.808 [2024-11-10 05:27:30.846572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:37.808 [2024-11-10 05:27:30.846580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:37.808 [2024-11-10 05:27:30.846588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:37.808 [2024-11-10 05:27:30.846598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:37.808 [2024-11-10 05:27:30.846606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:25:37.808 [2024-11-10 05:27:30.846613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:37.808 [2024-11-10 05:27:30.846621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:37.808 [2024-11-10 05:27:30.846629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:37.808 [2024-11-10 05:27:30.846637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.808 [2024-11-10 05:27:30.846645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:37.808 [2024-11-10 05:27:30.846654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:37.808 [2024-11-10 05:27:30.846662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.808 [2024-11-10 05:27:30.846669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:37.808 [2024-11-10 05:27:30.846678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:37.808 [2024-11-10 05:27:30.846686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.808 [2024-11-10 05:27:30.846693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:37.808 [2024-11-10 05:27:30.846701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:37.808 [2024-11-10 05:27:30.846708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.808 [2024-11-10 05:27:30.846716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:37.808 [2024-11-10 05:27:30.846728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:37.808 [2024-11-10 05:27:30.846736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.808 [2024-11-10 05:27:30.846744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:37.808 [2024-11-10 05:27:30.846751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:37.808 [2024-11-10 05:27:30.846759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.808 [2024-11-10 05:27:30.846766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:37.808 [2024-11-10 05:27:30.846775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:37.808 [2024-11-10 05:27:30.846783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:37.808 [2024-11-10 05:27:30.846791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:37.808 [2024-11-10 05:27:30.846799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:37.808 [2024-11-10 05:27:30.846807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:37.808 [2024-11-10 05:27:30.846823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:37.808 [2024-11-10 05:27:30.846830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:37.808 [2024-11-10 05:27:30.846838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.808 [2024-11-10 05:27:30.846845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:37.808 [2024-11-10 05:27:30.846853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:37.808 [2024-11-10 05:27:30.846863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.808 [2024-11-10 05:27:30.846871] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:37.808 [2024-11-10 05:27:30.846879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:37.808 [2024-11-10 05:27:30.846888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:37.808 [2024-11-10 05:27:30.846903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.808 [2024-11-10 05:27:30.846913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:37.808 [2024-11-10 05:27:30.846921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:37.808 [2024-11-10 05:27:30.846931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:37.808 [2024-11-10 05:27:30.846940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:37.808 [2024-11-10 05:27:30.846948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:37.808 [2024-11-10 05:27:30.846956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:37.808 [2024-11-10 05:27:30.846966] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:37.808 [2024-11-10 05:27:30.846976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:37.808 [2024-11-10 05:27:30.847001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:37.808 [2024-11-10 05:27:30.847010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:37.808 [2024-11-10 05:27:30.847017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:37.808 [2024-11-10 05:27:30.847027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:37.808 [2024-11-10 05:27:30.847035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:37.808 [2024-11-10 05:27:30.847042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:37.808 [2024-11-10 05:27:30.847049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:37.808 [2024-11-10 05:27:30.847057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:37.808 [2024-11-10 05:27:30.847064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:37.808 [2024-11-10 05:27:30.847072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:37.808 [2024-11-10 05:27:30.847080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:37.808 [2024-11-10 05:27:30.847087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:37.808 [2024-11-10 05:27:30.847095] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:37.808 [2024-11-10 05:27:30.847103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:37.808 [2024-11-10 05:27:30.847111] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:37.808 [2024-11-10 05:27:30.847127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:37.808 [2024-11-10 05:27:30.847135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:37.808 [2024-11-10 05:27:30.847142] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:37.808 [2024-11-10 05:27:30.847152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:37.808 [2024-11-10 05:27:30.847162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:37.808 [2024-11-10 05:27:30.847172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.808 [2024-11-10 05:27:30.847185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:37.809 [2024-11-10 05:27:30.847193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:25:37.809 [2024-11-10 05:27:30.847205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.874030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.874094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:37.809 [2024-11-10 05:27:30.874110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.768 ms 00:25:37.809 [2024-11-10 05:27:30.874120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.874241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.874262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:37.809 [2024-11-10 05:27:30.874272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:25:37.809 [2024-11-10 05:27:30.874287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.886558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.886610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:37.809 [2024-11-10 05:27:30.886623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.187 ms 00:25:37.809 [2024-11-10 05:27:30.886632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.886678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.886688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:37.809 [2024-11-10 05:27:30.886698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:37.809 [2024-11-10 05:27:30.886710] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.887329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.887370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:37.809 [2024-11-10 05:27:30.887383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:25:37.809 [2024-11-10 05:27:30.887398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.887555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.887566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:37.809 [2024-11-10 05:27:30.887582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:25:37.809 [2024-11-10 05:27:30.887596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.894528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.894580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:37.809 [2024-11-10 05:27:30.894594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.898 ms 00:25:37.809 [2024-11-10 05:27:30.894603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.898415] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:37.809 [2024-11-10 05:27:30.898466] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:37.809 [2024-11-10 05:27:30.898479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.898487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:37.809 [2024-11-10 05:27:30.898496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.778 ms 00:25:37.809 [2024-11-10 05:27:30.898503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.914187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.914231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:37.809 [2024-11-10 05:27:30.914246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.625 ms 00:25:37.809 [2024-11-10 05:27:30.914254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.917093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.917136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:37.809 [2024-11-10 05:27:30.917145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.786 ms 00:25:37.809 [2024-11-10 05:27:30.917153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.919517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.919562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:37.809 [2024-11-10 05:27:30.919572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.319 ms 00:25:37.809 [2024-11-10 05:27:30.919579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 
05:27:30.919927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.919942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:37.809 [2024-11-10 05:27:30.919957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:25:37.809 [2024-11-10 05:27:30.919966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.943233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.943300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:37.809 [2024-11-10 05:27:30.943314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.250 ms 00:25:37.809 [2024-11-10 05:27:30.943324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.951775] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:37.809 [2024-11-10 05:27:30.954858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.954900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:37.809 [2024-11-10 05:27:30.954921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.480 ms 00:25:37.809 [2024-11-10 05:27:30.954931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.955023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.955040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:37.809 [2024-11-10 05:27:30.955057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:37.809 [2024-11-10 05:27:30.955069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.955831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.955905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:37.809 [2024-11-10 05:27:30.955920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:25:37.809 [2024-11-10 05:27:30.955932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.955963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.955973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:37.809 [2024-11-10 05:27:30.955981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:37.809 [2024-11-10 05:27:30.956001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.956039] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:37.809 [2024-11-10 05:27:30.956051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.956059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:37.809 [2024-11-10 05:27:30.956081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:37.809 [2024-11-10 05:27:30.956091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.961400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.961448] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:37.809 [2024-11-10 05:27:30.961459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.285 ms 00:25:37.809 [2024-11-10 05:27:30.961467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.961550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.809 [2024-11-10 05:27:30.961560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:37.809 [2024-11-10 05:27:30.961569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:37.809 [2024-11-10 05:27:30.961577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.809 [2024-11-10 05:27:30.962738] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 132.985 ms, result 0 00:25:39.197  [2024-11-10T05:27:33.377Z] Copying: 13/1024 [MB] (13 MBps) ... [2024-11-10T05:28:36.825Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-10 05:28:36.617897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.589 [2024-11-10 05:28:36.618008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:43.589 [2024-11-10 05:28:36.618030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:43.589 [2024-11-10 05:28:36.618041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.590 [2024-11-10 05:28:36.618076] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:43.590 [2024-11-10 05:28:36.619113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.590 [2024-11-10 05:28:36.619271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:43.590 [2024-11-10 05:28:36.619284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:26:43.590 [2024-11-10 05:28:36.619296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.590 [2024-11-10 05:28:36.619581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.590 [2024-11-10 05:28:36.619605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:43.590 [2024-11-10 05:28:36.619617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:26:43.590 [2024-11-10 05:28:36.619634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.590 [2024-11-10 05:28:36.625167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.590 [2024-11-10 05:28:36.625225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:43.590 [2024-11-10 05:28:36.625239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.512 ms 00:26:43.590 [2024-11-10 05:28:36.625249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.590 [2024-11-10 05:28:36.632092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.590 [2024-11-10 05:28:36.632134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:43.590 [2024-11-10
05:28:36.632146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.814 ms 00:26:43.590 [2024-11-10 05:28:36.632166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.590 [2024-11-10 05:28:36.635519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.590 [2024-11-10 05:28:36.635577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:43.590 [2024-11-10 05:28:36.635588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.283 ms 00:26:43.590 [2024-11-10 05:28:36.635596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.590 [2024-11-10 05:28:36.641170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.590 [2024-11-10 05:28:36.641221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:43.590 [2024-11-10 05:28:36.641233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.523 ms 00:26:43.590 [2024-11-10 05:28:36.641242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.590 [2024-11-10 05:28:36.645622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.590 [2024-11-10 05:28:36.645670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:43.590 [2024-11-10 05:28:36.645683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.328 ms 00:26:43.590 [2024-11-10 05:28:36.645691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.590 [2024-11-10 05:28:36.649403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.590 [2024-11-10 05:28:36.649449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:43.590 [2024-11-10 05:28:36.649460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.693 ms 00:26:43.590 [2024-11-10 05:28:36.649468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.590 [2024-11-10 05:28:36.652285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.590 [2024-11-10 05:28:36.652331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:43.590 [2024-11-10 05:28:36.652342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.753 ms 00:26:43.590 [2024-11-10 05:28:36.652350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.590 [2024-11-10 05:28:36.654218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.590 [2024-11-10 05:28:36.654267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:43.590 [2024-11-10 05:28:36.654278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.820 ms 00:26:43.590 [2024-11-10 05:28:36.654286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.590 [2024-11-10 05:28:36.656034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.590 [2024-11-10 05:28:36.656079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:43.590 [2024-11-10 05:28:36.656089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.669 ms 00:26:43.590 [2024-11-10 05:28:36.656096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.590 [2024-11-10 05:28:36.656139] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:43.590 [2024-11-10 05:28:36.656164] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:43.590 [2024-11-10 05:28:36.656176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:43.590 [2024-11-10 05:28:36.656186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-99: 0 / 261120 wr_cnt: 0 state: free 00:26:43.591 [2024-11-10 05:28:36.657012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:43.591 [2024-11-10 05:28:36.657030] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:43.591 [2024-11-10 05:28:36.657040] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 63757667-739a-4258-a1db-5238ea6e5f8e 00:26:43.591 [2024-11-10 05:28:36.657048] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:43.591 [2024-11-10 05:28:36.657056] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:43.591 [2024-11-10 05:28:36.657064] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:43.591 [2024-11-10 05:28:36.657072] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:43.591 [2024-11-10 05:28:36.657088] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:43.591 [2024-11-10 05:28:36.657097] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:43.591 [2024-11-10 05:28:36.657105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:43.591 [2024-11-10 05:28:36.657111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:43.591 [2024-11-10 05:28:36.657118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:43.591 [2024-11-10 05:28:36.657126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.591 [2024-11-10 05:28:36.657135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:43.591 [2024-11-10 05:28:36.657155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:26:43.591 [2024-11-10 05:28:36.657163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.591 [2024-11-10 05:28:36.660375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.591 [2024-11-10 05:28:36.660415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:43.591 [2024-11-10 05:28:36.660428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.180 ms 00:26:43.591 [2024-11-10 05:28:36.660437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.591 [2024-11-10 05:28:36.660603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.591 [2024-11-10 05:28:36.660615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:43.591 [2024-11-10 05:28:36.660624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:26:43.591 [2024-11-10 05:28:36.660632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.591 [2024-11-10 05:28:36.670008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.591 [2024-11-10 05:28:36.670056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:43.591 [2024-11-10 05:28:36.670068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.591 [2024-11-10 05:28:36.670077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.591 [2024-11-10 05:28:36.670142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.591 [2024-11-10 05:28:36.670152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:43.591 [2024-11-10 05:28:36.670162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.591 [2024-11-10 05:28:36.670171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:43.591 [2024-11-10 05:28:36.670237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.591 [2024-11-10 05:28:36.670249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:43.591 [2024-11-10 05:28:36.670260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.591 [2024-11-10 05:28:36.670268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.591 [2024-11-10 05:28:36.670285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.591 [2024-11-10 05:28:36.670298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:43.591 [2024-11-10 05:28:36.670307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.591 [2024-11-10 05:28:36.670315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.591 [2024-11-10 05:28:36.690084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.591 [2024-11-10 05:28:36.690140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:43.591 [2024-11-10 05:28:36.690154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.591 [2024-11-10 05:28:36.690165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.591 [2024-11-10 05:28:36.706164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.591 [2024-11-10 05:28:36.706226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:43.591 [2024-11-10 05:28:36.706240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.591 [2024-11-10 05:28:36.706250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.591 [2024-11-10 05:28:36.706316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.591 [2024-11-10 05:28:36.706327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:43.591 [2024-11-10 05:28:36.706336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.591 [2024-11-10 05:28:36.706346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.591 [2024-11-10 05:28:36.706386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.592 [2024-11-10 05:28:36.706397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:43.592 [2024-11-10 05:28:36.706418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.592 [2024-11-10 05:28:36.706428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.592 [2024-11-10 05:28:36.706520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.592 [2024-11-10 05:28:36.706534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:43.592 [2024-11-10 05:28:36.706544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.592 [2024-11-10 05:28:36.706553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.592 [2024-11-10 05:28:36.706586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.592 [2024-11-10 05:28:36.706595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:43.592 [2024-11-10 05:28:36.706604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.592 
[2024-11-10 05:28:36.706618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.592 [2024-11-10 05:28:36.706668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.592 [2024-11-10 05:28:36.706687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:43.592 [2024-11-10 05:28:36.706696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.592 [2024-11-10 05:28:36.706710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.592 [2024-11-10 05:28:36.706771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:43.592 [2024-11-10 05:28:36.706933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:43.592 [2024-11-10 05:28:36.706948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:43.592 [2024-11-10 05:28:36.706959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.592 [2024-11-10 05:28:36.707179] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 89.243 ms, result 0 00:26:43.852 00:26:43.852 00:26:43.852 05:28:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:46.398 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:46.398 05:28:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:46.398 05:28:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:46.398 05:28:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:46.398 05:28:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:46.398 05:28:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:46.398 05:28:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:46.398 05:28:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:46.398 05:28:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 89066 00:26:46.398 05:28:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 89066 ']' 00:26:46.398 05:28:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 89066 00:26:46.398 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89066) - No such process 00:26:46.398 Process with pid 89066 is not found 00:26:46.398 05:28:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 89066 is not found' 00:26:46.398 05:28:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:46.659 05:28:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:46.659 05:28:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:46.659 Remove shared memory files 00:26:46.659 05:28:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:46.659 05:28:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:46.659 05:28:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:46.659 05:28:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:46.659 05:28:39 
ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:46.659 ************************************ 00:26:46.659 END TEST ftl_dirty_shutdown 00:26:46.659 ************************************ 00:26:46.659 00:26:46.659 real 4m23.312s 00:26:46.659 user 4m58.687s 00:26:46.659 sys 0m29.913s 00:26:46.659 05:28:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:46.659 05:28:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:46.921 05:28:39 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:46.921 05:28:39 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:46.921 05:28:39 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:46.921 05:28:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:46.921 ************************************ 00:26:46.921 START TEST ftl_upgrade_shutdown 00:26:46.921 ************************************ 00:26:46.921 05:28:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:46.921 * Looking for test storage... 00:26:46.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:46.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.921 --rc genhtml_branch_coverage=1 00:26:46.921 --rc genhtml_function_coverage=1 00:26:46.921 --rc genhtml_legend=1 00:26:46.921 --rc geninfo_all_blocks=1 00:26:46.921 --rc geninfo_unexecuted_blocks=1 00:26:46.921 00:26:46.921 ' 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:46.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.921 --rc genhtml_branch_coverage=1 00:26:46.921 --rc genhtml_function_coverage=1 00:26:46.921 --rc genhtml_legend=1 00:26:46.921 --rc geninfo_all_blocks=1 00:26:46.921 --rc geninfo_unexecuted_blocks=1 00:26:46.921 00:26:46.921 ' 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:46.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.921 --rc genhtml_branch_coverage=1 00:26:46.921 --rc genhtml_function_coverage=1 00:26:46.921 --rc genhtml_legend=1 00:26:46.921 --rc geninfo_all_blocks=1 00:26:46.921 --rc geninfo_unexecuted_blocks=1 00:26:46.921 00:26:46.921 ' 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:46.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.921 --rc genhtml_branch_coverage=1 00:26:46.921 --rc genhtml_function_coverage=1 00:26:46.921 --rc genhtml_legend=1 00:26:46.921 --rc geninfo_all_blocks=1 00:26:46.921 --rc geninfo_unexecuted_blocks=1 00:26:46.921 00:26:46.921 ' 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:46.921 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:46.922 05:28:40 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91908 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91908 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 91908 ']' 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:46.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:46.922 05:28:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:47.183 [2024-11-10 05:28:40.189059] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:26:47.183 [2024-11-10 05:28:40.189221] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91908 ] 00:26:47.183 [2024-11-10 05:28:40.342129] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.183 [2024-11-10 05:28:40.388946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:48.126 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:48.127 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:48.127 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:48.127 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:48.127 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:48.127 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:48.127 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:48.127 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:48.127 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:48.127 05:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:26:48.127 05:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:48.127 05:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:48.127 05:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:26:48.127 05:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:48.389 05:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:48.389 { 00:26:48.389 "name": "basen1", 00:26:48.389 "aliases": [ 00:26:48.389 "642739d0-2979-436e-8646-2f0539e0df05" 00:26:48.389 ], 00:26:48.389 "product_name": "NVMe disk", 00:26:48.389 "block_size": 4096, 00:26:48.389 "num_blocks": 1310720, 00:26:48.389 "uuid": "642739d0-2979-436e-8646-2f0539e0df05", 00:26:48.389 "numa_id": -1, 00:26:48.389 "assigned_rate_limits": { 00:26:48.389 "rw_ios_per_sec": 0, 00:26:48.389 "rw_mbytes_per_sec": 0, 00:26:48.389 "r_mbytes_per_sec": 0, 00:26:48.389 "w_mbytes_per_sec": 0 00:26:48.389 }, 00:26:48.389 "claimed": true, 00:26:48.389 "claim_type": "read_many_write_one", 00:26:48.389 "zoned": false, 00:26:48.389 "supported_io_types": { 00:26:48.389 "read": true, 00:26:48.389 "write": true, 00:26:48.389 "unmap": true, 00:26:48.389 "flush": true, 00:26:48.389 "reset": true, 00:26:48.389 "nvme_admin": true, 00:26:48.389 "nvme_io": true, 00:26:48.389 "nvme_io_md": false, 00:26:48.389 "write_zeroes": true, 00:26:48.389 "zcopy": false, 00:26:48.389 "get_zone_info": false, 00:26:48.389 "zone_management": false, 00:26:48.389 "zone_append": false, 00:26:48.389 "compare": true, 00:26:48.389 "compare_and_write": false, 00:26:48.389 "abort": true, 00:26:48.389 "seek_hole": false, 00:26:48.389 "seek_data": false, 00:26:48.389 "copy": true, 00:26:48.389 "nvme_iov_md": false 00:26:48.389 }, 00:26:48.389 "driver_specific": { 00:26:48.389 "nvme": [ 00:26:48.389 { 00:26:48.389 "pci_address": "0000:00:11.0", 00:26:48.389 "trid": { 00:26:48.389 "trtype": "PCIe", 00:26:48.389 "traddr": "0000:00:11.0" 00:26:48.389 }, 00:26:48.389 "ctrlr_data": { 00:26:48.389 "cntlid": 0, 00:26:48.389 "vendor_id": "0x1b36", 00:26:48.389 "model_number": "QEMU NVMe Ctrl", 00:26:48.389 "serial_number": "12341", 00:26:48.389 "firmware_revision": "8.0.0", 00:26:48.389 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:48.389 "oacs": { 00:26:48.389 "security": 0, 00:26:48.389 "format": 1, 00:26:48.389 "firmware": 0, 00:26:48.389 "ns_manage": 1 00:26:48.389 }, 00:26:48.389 "multi_ctrlr": false, 00:26:48.389 "ana_reporting": false 00:26:48.389 }, 00:26:48.389 "vs": { 00:26:48.389 "nvme_version": "1.4" 00:26:48.389 }, 00:26:48.389 "ns_data": { 00:26:48.389 "id": 1, 00:26:48.389 "can_share": false 00:26:48.389 } 00:26:48.389 } 00:26:48.389 ], 00:26:48.389 "mp_policy": "active_passive" 00:26:48.389 } 00:26:48.389 } 00:26:48.389 ]' 00:26:48.389 05:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:48.389 05:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:48.389 05:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:48.389 05:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:48.389 05:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:48.389 05:28:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:26:48.389 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:48.389 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:48.389 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:48.389 05:28:41 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:48.389 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:48.651 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=d3e344e7-d9fb-40f6-91ca-cdd732c760d5 00:26:48.651 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:48.651 05:28:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d3e344e7-d9fb-40f6-91ca-cdd732c760d5 00:26:48.911 05:28:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:49.172 05:28:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=39d1bbaa-b691-47ae-80b9-e769a2966711 00:26:49.172 05:28:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 39d1bbaa-b691-47ae-80b9-e769a2966711 00:26:49.436 05:28:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=f0ed15e5-5e78-4b28-bd74-7f61ddab7c91 00:26:49.436 05:28:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z f0ed15e5-5e78-4b28-bd74-7f61ddab7c91 ]] 00:26:49.436 05:28:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 f0ed15e5-5e78-4b28-bd74-7f61ddab7c91 5120 00:26:49.436 05:28:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:49.436 05:28:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:49.436 05:28:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=f0ed15e5-5e78-4b28-bd74-7f61ddab7c91 00:26:49.436 05:28:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:49.436 05:28:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size f0ed15e5-5e78-4b28-bd74-7f61ddab7c91 00:26:49.436 05:28:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=f0ed15e5-5e78-4b28-bd74-7f61ddab7c91 00:26:49.436 05:28:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:49.436 05:28:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:49.436 05:28:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:49.436 05:28:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0ed15e5-5e78-4b28-bd74-7f61ddab7c91 00:26:49.695 05:28:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:49.695 { 00:26:49.695 "name": "f0ed15e5-5e78-4b28-bd74-7f61ddab7c91", 00:26:49.695 "aliases": [ 00:26:49.695 "lvs/basen1p0" 00:26:49.695 ], 00:26:49.695 "product_name": "Logical Volume", 00:26:49.695 "block_size": 4096, 00:26:49.695 "num_blocks": 5242880, 00:26:49.695 "uuid": "f0ed15e5-5e78-4b28-bd74-7f61ddab7c91", 00:26:49.695 "assigned_rate_limits": { 00:26:49.695 "rw_ios_per_sec": 0, 00:26:49.695 "rw_mbytes_per_sec": 0, 00:26:49.695 "r_mbytes_per_sec": 0, 00:26:49.695 "w_mbytes_per_sec": 0 00:26:49.695 }, 00:26:49.695 "claimed": false, 00:26:49.695 "zoned": false, 00:26:49.695 "supported_io_types": { 00:26:49.695 "read": true, 00:26:49.695 "write": true, 00:26:49.695 "unmap": true, 00:26:49.695 "flush": false, 00:26:49.695 "reset": true, 00:26:49.695 "nvme_admin": false, 00:26:49.695 "nvme_io": false, 00:26:49.695 "nvme_io_md": false, 00:26:49.695 "write_zeroes": 
true, 00:26:49.695 "zcopy": false, 00:26:49.695 "get_zone_info": false, 00:26:49.695 "zone_management": false, 00:26:49.695 "zone_append": false, 00:26:49.695 "compare": false, 00:26:49.695 "compare_and_write": false, 00:26:49.695 "abort": false, 00:26:49.695 "seek_hole": true, 00:26:49.695 "seek_data": true, 00:26:49.695 "copy": false, 00:26:49.695 "nvme_iov_md": false 00:26:49.695 }, 00:26:49.695 "driver_specific": { 00:26:49.695 "lvol": { 00:26:49.695 "lvol_store_uuid": "39d1bbaa-b691-47ae-80b9-e769a2966711", 00:26:49.695 "base_bdev": "basen1", 00:26:49.695 "thin_provision": true, 00:26:49.695 "num_allocated_clusters": 0, 00:26:49.695 "snapshot": false, 00:26:49.695 "clone": false, 00:26:49.695 "esnap_clone": false 00:26:49.695 } 00:26:49.695 } 00:26:49.695 } 00:26:49.695 ]' 00:26:49.695 05:28:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:49.695 05:28:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:49.695 05:28:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:49.695 05:28:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:26:49.695 05:28:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:26:49.695 05:28:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:26:49.695 05:28:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:49.695 05:28:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:49.695 05:28:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:49.953 05:28:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:49.953 05:28:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:49.953 05:28:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:50.210 05:28:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:50.210 05:28:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:50.210 05:28:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d f0ed15e5-5e78-4b28-bd74-7f61ddab7c91 -c cachen1p0 --l2p_dram_limit 2 00:26:50.471 [2024-11-10 05:28:43.448776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.471 [2024-11-10 05:28:43.448821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:50.471 [2024-11-10 05:28:43.448834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:50.471 [2024-11-10 05:28:43.448843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.471 [2024-11-10 05:28:43.448883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.471 [2024-11-10 05:28:43.448894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:50.471 [2024-11-10 05:28:43.448901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:26:50.471 [2024-11-10 05:28:43.448911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.471 [2024-11-10 05:28:43.448929] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:50.471 [2024-11-10 
05:28:43.449144] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:50.471 [2024-11-10 05:28:43.449158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.471 [2024-11-10 05:28:43.449167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:50.471 [2024-11-10 05:28:43.449176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.233 ms 00:26:50.471 [2024-11-10 05:28:43.449184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.471 [2024-11-10 05:28:43.449207] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 786d8882-3dd7-4f51-b235-b2b44e9b80fd 00:26:50.471 [2024-11-10 05:28:43.450465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.471 [2024-11-10 05:28:43.450497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:50.471 [2024-11-10 05:28:43.450507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:50.471 [2024-11-10 05:28:43.450515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.471 [2024-11-10 05:28:43.457433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.471 [2024-11-10 05:28:43.457459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:50.471 [2024-11-10 05:28:43.457470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.877 ms 00:26:50.471 [2024-11-10 05:28:43.457477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.471 [2024-11-10 05:28:43.457545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.471 [2024-11-10 05:28:43.457553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:50.471 [2024-11-10 05:28:43.457562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:50.472 [2024-11-10 05:28:43.457570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.472 [2024-11-10 05:28:43.457610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.472 [2024-11-10 05:28:43.457621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:50.472 [2024-11-10 05:28:43.457630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:50.472 [2024-11-10 05:28:43.457637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.472 [2024-11-10 05:28:43.457656] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:50.472 [2024-11-10 05:28:43.459289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.472 [2024-11-10 05:28:43.459313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:50.472 [2024-11-10 05:28:43.459323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.638 ms 00:26:50.472 [2024-11-10 05:28:43.459334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.472 [2024-11-10 05:28:43.459355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.472 [2024-11-10 05:28:43.459364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:50.472 [2024-11-10 05:28:43.459374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:50.472 [2024-11-10 05:28:43.459384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:50.472 [2024-11-10 05:28:43.459397] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:50.472 [2024-11-10 05:28:43.459508] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:50.472 [2024-11-10 05:28:43.459519] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:50.472 [2024-11-10 05:28:43.459532] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:50.472 [2024-11-10 05:28:43.459542] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:50.472 [2024-11-10 05:28:43.459558] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:50.472 [2024-11-10 05:28:43.459565] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:50.472 [2024-11-10 05:28:43.459576] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:50.472 [2024-11-10 05:28:43.459583] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:50.472 [2024-11-10 05:28:43.459591] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:50.472 [2024-11-10 05:28:43.459599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.472 [2024-11-10 05:28:43.459607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:50.472 [2024-11-10 05:28:43.459614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.203 ms 00:26:50.472 [2024-11-10 05:28:43.459622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.472 [2024-11-10 05:28:43.459691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.472 [2024-11-10 05:28:43.459702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:50.472 [2024-11-10 05:28:43.459709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:26:50.472 [2024-11-10 05:28:43.459716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.472 [2024-11-10 05:28:43.459789] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:50.472 [2024-11-10 05:28:43.459801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:50.472 [2024-11-10 05:28:43.459809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:50.472 [2024-11-10 05:28:43.459817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.472 [2024-11-10 05:28:43.459823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:50.472 [2024-11-10 05:28:43.459832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:50.472 [2024-11-10 05:28:43.459839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:50.472 [2024-11-10 05:28:43.459847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:50.472 [2024-11-10 05:28:43.459853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:50.472 [2024-11-10 05:28:43.459861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.472 [2024-11-10 05:28:43.459866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:50.472 [2024-11-10 05:28:43.459876] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:26:50.472 [2024-11-10 05:28:43.459882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.472 [2024-11-10 05:28:43.459891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:50.472 [2024-11-10 05:28:43.459897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:50.472 [2024-11-10 05:28:43.459904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.472 [2024-11-10 05:28:43.459911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:50.472 [2024-11-10 05:28:43.459918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:50.472 [2024-11-10 05:28:43.459924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.472 [2024-11-10 05:28:43.459932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:50.472 [2024-11-10 05:28:43.459939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:50.472 [2024-11-10 05:28:43.459948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:50.472 [2024-11-10 05:28:43.459955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:50.472 [2024-11-10 05:28:43.459970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:50.472 [2024-11-10 05:28:43.459976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:50.472 [2024-11-10 05:28:43.460005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:50.472 [2024-11-10 05:28:43.460013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:50.472 [2024-11-10 05:28:43.460021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:50.472 [2024-11-10 05:28:43.460027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:50.472 [2024-11-10 05:28:43.460038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:50.472 [2024-11-10 05:28:43.460044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:50.472 [2024-11-10 05:28:43.460053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:50.472 [2024-11-10 05:28:43.460059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:50.472 [2024-11-10 05:28:43.460084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.472 [2024-11-10 05:28:43.460091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:50.472 [2024-11-10 05:28:43.460100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:50.472 [2024-11-10 05:28:43.460106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.472 [2024-11-10 05:28:43.460114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:50.472 [2024-11-10 05:28:43.460121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:50.472 [2024-11-10 05:28:43.460135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.472 [2024-11-10 05:28:43.460141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:50.472 [2024-11-10 05:28:43.460150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:50.472 [2024-11-10 05:28:43.460156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.472 [2024-11-10 05:28:43.460164] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:26:50.472 [2024-11-10 05:28:43.460172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:50.472 [2024-11-10 05:28:43.460182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:50.472 [2024-11-10 05:28:43.460189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.472 [2024-11-10 05:28:43.460198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:50.472 [2024-11-10 05:28:43.460205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:50.472 [2024-11-10 05:28:43.460213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:50.472 [2024-11-10 05:28:43.460219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:50.472 [2024-11-10 05:28:43.460227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:50.472 [2024-11-10 05:28:43.460237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:50.472 [2024-11-10 05:28:43.460249] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:50.472 [2024-11-10 05:28:43.460258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:50.472 [2024-11-10 05:28:43.460267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:50.472 [2024-11-10 05:28:43.460275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:50.472 [2024-11-10 05:28:43.460284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:50.472 [2024-11-10 05:28:43.460290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:50.472 [2024-11-10 05:28:43.460300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:50.472 [2024-11-10 05:28:43.460307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:50.473 [2024-11-10 05:28:43.460316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:50.473 [2024-11-10 05:28:43.460323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:50.473 [2024-11-10 05:28:43.460331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:50.473 [2024-11-10 05:28:43.460337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:50.473 [2024-11-10 05:28:43.460344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:50.473 [2024-11-10 05:28:43.460351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:50.473 [2024-11-10 05:28:43.460358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:50.473 [2024-11-10 05:28:43.460364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:50.473 [2024-11-10 05:28:43.460371] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:50.473 [2024-11-10 05:28:43.460379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:50.473 [2024-11-10 05:28:43.460387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:50.473 [2024-11-10 05:28:43.460393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:50.473 [2024-11-10 05:28:43.460401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:50.473 [2024-11-10 05:28:43.460407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:50.473 [2024-11-10 05:28:43.460415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.473 [2024-11-10 05:28:43.460421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:50.473 [2024-11-10 05:28:43.460432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.675 ms 00:26:50.473 [2024-11-10 05:28:43.460439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.473 [2024-11-10 05:28:43.460469] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
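For reference, the whole FTL bring-up traced above reduces to a short rpc.py sequence. A condensed recap — device names, sizes in MiB, and UUIDs exactly as printed in this log, not the full ftl/common.sh logic:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # -> basen1 (4096 B x 1310720 blocks = 5120 MiB)
  $rpc bdev_lvol_create_lvstore basen1 lvs                            # -> lvs 39d1bbaa-b691-47ae-80b9-e769a2966711
  $rpc bdev_lvol_create basen1p0 20480 -t -u 39d1bbaa-b691-47ae-80b9-e769a2966711
                                                                      # 20480 MiB thin (-t) lvol -> f0ed15e5-5e78-4b28-bd74-7f61ddab7c91
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
  $rpc bdev_split_create cachen1 -s 5120 1                            # one 5120 MiB split -> cachen1p0
  $rpc -t 60 bdev_ftl_create -b ftl -d f0ed15e5-5e78-4b28-bd74-7f61ddab7c91 -c cachen1p0 --l2p_dram_limit 2
                                                                      # -> FTL uuid 786d8882-3dd7-4f51-b235-b2b44e9b80fd

The layout dump above agrees with these inputs: 20480.00 MiB base capacity (the thin lvol's virtual size), 5120.00 MiB NV cache capacity, and an L2P resident limit of 2 MiB.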
00:26:50.473 [2024-11-10 05:28:43.460482] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:53.767 [2024-11-10 05:28:46.886212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.767 [2024-11-10 05:28:46.886311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:53.767 [2024-11-10 05:28:46.886339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3425.723 ms 00:26:53.767 [2024-11-10 05:28:46.886351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.767 [2024-11-10 05:28:46.904927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.767 [2024-11-10 05:28:46.905024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:53.767 [2024-11-10 05:28:46.905046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.429 ms 00:26:53.767 [2024-11-10 05:28:46.905057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.767 [2024-11-10 05:28:46.905136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.767 [2024-11-10 05:28:46.905147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:53.767 [2024-11-10 05:28:46.905165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:53.767 [2024-11-10 05:28:46.905176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.767 [2024-11-10 05:28:46.920984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.767 [2024-11-10 05:28:46.921066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:53.767 [2024-11-10 05:28:46.921094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.743 ms 00:26:53.767 [2024-11-10 05:28:46.921104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.767 [2024-11-10 05:28:46.921143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.767 [2024-11-10 05:28:46.921158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:53.767 [2024-11-10 05:28:46.921173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:53.767 [2024-11-10 05:28:46.921182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.767 [2024-11-10 05:28:46.921886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.767 [2024-11-10 05:28:46.921924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:53.767 [2024-11-10 05:28:46.921941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.643 ms 00:26:53.767 [2024-11-10 05:28:46.921953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.767 [2024-11-10 05:28:46.922035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.767 [2024-11-10 05:28:46.922047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:53.767 [2024-11-10 05:28:46.922065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:26:53.767 [2024-11-10 05:28:46.922075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.767 [2024-11-10 05:28:46.953488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.767 [2024-11-10 05:28:46.953557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:53.767 [2024-11-10 05:28:46.953583] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.379 ms 00:26:53.767 [2024-11-10 05:28:46.953601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.767 [2024-11-10 05:28:46.965416] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:53.767 [2024-11-10 05:28:46.967049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.767 [2024-11-10 05:28:46.967102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:53.767 [2024-11-10 05:28:46.967116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.312 ms 00:26:53.767 [2024-11-10 05:28:46.967130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.767 [2024-11-10 05:28:46.990351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.767 [2024-11-10 05:28:46.990415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:53.767 [2024-11-10 05:28:46.990429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.187 ms 00:26:53.767 [2024-11-10 05:28:46.990445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.767 [2024-11-10 05:28:46.990564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.767 [2024-11-10 05:28:46.990581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:53.767 [2024-11-10 05:28:46.990592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:26:53.767 [2024-11-10 05:28:46.990605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.767 [2024-11-10 05:28:46.995773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.767 [2024-11-10 05:28:46.995827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:53.767 [2024-11-10 05:28:46.995846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.139 ms 00:26:53.767 [2024-11-10 05:28:46.995859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.026 [2024-11-10 05:28:47.000925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.026 [2024-11-10 05:28:47.000977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:54.026 [2024-11-10 05:28:47.001004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.012 ms 00:26:54.026 [2024-11-10 05:28:47.001016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.026 [2024-11-10 05:28:47.001361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.026 [2024-11-10 05:28:47.001376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:54.026 [2024-11-10 05:28:47.001390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.298 ms 00:26:54.027 [2024-11-10 05:28:47.001405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.027 [2024-11-10 05:28:47.048848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.027 [2024-11-10 05:28:47.048903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:54.027 [2024-11-10 05:28:47.048917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.415 ms 00:26:54.027 [2024-11-10 05:28:47.048931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.027 [2024-11-10 05:28:47.056680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:54.027 [2024-11-10 05:28:47.056733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:54.027 [2024-11-10 05:28:47.056746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.671 ms 00:26:54.027 [2024-11-10 05:28:47.056759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.027 [2024-11-10 05:28:47.062658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.027 [2024-11-10 05:28:47.062708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:54.027 [2024-11-10 05:28:47.062720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.868 ms 00:26:54.027 [2024-11-10 05:28:47.062731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.027 [2024-11-10 05:28:47.068953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.027 [2024-11-10 05:28:47.069023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:54.027 [2024-11-10 05:28:47.069036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.193 ms 00:26:54.027 [2024-11-10 05:28:47.069051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.027 [2024-11-10 05:28:47.069086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.027 [2024-11-10 05:28:47.069099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:54.027 [2024-11-10 05:28:47.069111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:54.027 [2024-11-10 05:28:47.069123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.027 [2024-11-10 05:28:47.069210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.027 [2024-11-10 05:28:47.069225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:54.027 [2024-11-10 05:28:47.069235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:26:54.027 [2024-11-10 05:28:47.069252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.027 [2024-11-10 05:28:47.070644] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3621.292 ms, result 0 00:26:54.027 { 00:26:54.027 "name": "ftl", 00:26:54.027 "uuid": "786d8882-3dd7-4f51-b235-b2b44e9b80fd" 00:26:54.027 } 00:26:54.027 05:28:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:54.286 [2024-11-10 05:28:47.292226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.286 05:28:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:54.547 05:28:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:54.548 [2024-11-10 05:28:47.716648] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:54.548 05:28:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:54.805 [2024-11-10 05:28:47.933065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:54.806 05:28:47 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:55.063 Fill FTL, iteration 1 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=92030 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 92030 /var/tmp/spdk.tgt.sock 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92030 ']' 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:55.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:55.063 05:28:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:55.321 [2024-11-10 05:28:48.351699] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
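Before any data moves, the ftl bdev has just been exported over NVMe/TCP (the nvmf_* calls above), and a second single-core spdk_tgt — the initiator, pid 92030, RPC socket /var/tmp/spdk.tgt.sock — is starting up to consume it. Condensed, target side first, then the attach the initiator performs next:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport --trtype TCP
  $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1       # allow any host, max 1 namespace
  $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  # initiator side, against the second target's RPC socket:
  $rpc -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # -> ftln1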
00:26:55.321 [2024-11-10 05:28:48.351821] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92030 ] 00:26:55.321 [2024-11-10 05:28:48.500070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.321 [2024-11-10 05:28:48.532780] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.264 05:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:56.264 05:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:56.264 05:28:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:56.264 ftln1 00:26:56.264 05:28:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:56.264 05:28:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:56.611 05:28:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:26:56.611 05:28:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 92030 00:26:56.611 05:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92030 ']' 00:26:56.611 05:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92030 00:26:56.611 05:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:26:56.611 05:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:56.611 05:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92030 00:26:56.611 05:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:56.611 05:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:56.611 killing process with pid 92030 00:26:56.611 05:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92030' 00:26:56.611 05:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92030 00:26:56.611 05:28:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92030 00:26:56.868 05:28:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:56.868 05:28:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:56.868 [2024-11-10 05:28:50.009847] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:26:56.868 [2024-11-10 05:28:50.009963] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92061 ] 00:26:57.126 [2024-11-10 05:28:50.159436] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.126 [2024-11-10 05:28:50.192255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.510  [2024-11-10T05:28:52.689Z] Copying: 178/1024 [MB] (178 MBps) [2024-11-10T05:28:53.632Z] Copying: 379/1024 [MB] (201 MBps) [2024-11-10T05:28:54.574Z] Copying: 629/1024 [MB] (250 MBps) [2024-11-10T05:28:55.145Z] Copying: 890/1024 [MB] (261 MBps) [2024-11-10T05:28:55.145Z] Copying: 1024/1024 [MB] (average 226 MBps) 00:27:01.909 00:27:01.909 Calculate MD5 checksum, iteration 1 00:27:01.909 05:28:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:01.909 05:28:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:01.909 05:28:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:01.909 05:28:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:01.909 05:28:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:01.909 05:28:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:01.909 05:28:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:01.909 05:28:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:01.909 [2024-11-10 05:28:55.128024] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:01.909 [2024-11-10 05:28:55.128139] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92119 ] 00:27:02.171 [2024-11-10 05:28:55.276475] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.171 [2024-11-10 05:28:55.326478] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.559  [2024-11-10T05:28:57.747Z] Copying: 526/1024 [MB] (526 MBps) [2024-11-10T05:28:57.747Z] Copying: 1024/1024 [MB] (average 533 MBps) 00:27:04.511 00:27:04.511 05:28:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:04.511 05:28:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:07.053 05:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:07.053 05:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=3846eae18d6f1e10bc1830efa31eef5b 00:27:07.053 05:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:07.053 Fill FTL, iteration 2 00:27:07.053 05:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:07.053 05:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:07.053 05:28:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:07.053 05:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:07.053 05:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:07.053 05:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:07.053 05:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:07.053 05:28:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:07.053 [2024-11-10 05:28:59.903542] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
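The pattern repeating here: each iteration streams 1 GiB of /dev/urandom into ftln1 through spdk_dd at queue depth 2, reads the same gigabyte back into test/ftl/file, and records its md5 in sums[i]; seek and skip both advance by 1024 MiB per pass, so iteration 2 exercises the second gigabyte. A minimal bash condensation — assuming spdk_dd is invoked with the initiator's --cpumask/--rpc-socket/--json flags, as tcp_dd does in the trace above:

  bs=1048576; count=1024; qd=2; iterations=2; file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  for ((i = 0; i < iterations; i++)); do
    # fill: write 1024 x 1 MiB of random data at offset i GiB
    spdk_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$((i * count))
    # read the same region back and fingerprint it
    spdk_dd --ib=ftln1 --of=$file --bs=$bs --count=$count --qd=$qd --skip=$((i * count))
    sums[i]=$(md5sum $file | cut -f1 -d' ')
  done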
00:27:07.053 [2024-11-10 05:28:59.903663] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92169 ] 00:27:07.053 [2024-11-10 05:29:00.049822] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.053 [2024-11-10 05:29:00.084699] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.438  [2024-11-10T05:29:02.617Z] Copying: 186/1024 [MB] (186 MBps) [2024-11-10T05:29:03.559Z] Copying: 375/1024 [MB] (189 MBps) [2024-11-10T05:29:04.501Z] Copying: 637/1024 [MB] (262 MBps) [2024-11-10T05:29:05.444Z] Copying: 854/1024 [MB] (217 MBps) [2024-11-10T05:29:05.705Z] Copying: 1024/1024 [MB] (average 205 MBps) 00:27:12.469 00:27:12.469 Calculate MD5 checksum, iteration 2 00:27:12.469 05:29:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:12.469 05:29:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:12.469 05:29:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:12.469 05:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:12.469 05:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:12.469 05:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:12.469 05:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:12.469 05:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:12.469 [2024-11-10 05:29:05.646219] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
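Once the second digest lands just below (a0f5e7b9c2e418aa85757fa04160fd0f), the harness turns on verbose_mode, arms prep_upgrade_on_shutdown, and sanity-checks that the writes actually reached the NV cache by counting chunks with nonzero utilization in the property dump. Against the chunk list printed further down — chunks 1 and 2 CLOSED at utilization 1.0, chunk 3 OPEN at 0.001953125, chunks 0 and 4 at 0.0 — the jq filter from the trace evaluates to 3:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
  # prints 3, hence used=3; the [[ 3 -eq 0 ]] guard in the trace is false, i.e. the cache is not empty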
00:27:12.469 [2024-11-10 05:29:05.646890] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92229 ] 00:27:12.730 [2024-11-10 05:29:05.796537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.730 [2024-11-10 05:29:05.879510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.115  [2024-11-10T05:29:08.287Z] Copying: 608/1024 [MB] (608 MBps) [2024-11-10T05:29:08.856Z] Copying: 1024/1024 [MB] (average 609 MBps) 00:27:15.620 00:27:15.620 05:29:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:15.620 05:29:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:17.535 05:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:17.535 05:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a0f5e7b9c2e418aa85757fa04160fd0f 00:27:17.535 05:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:17.535 05:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:17.535 05:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:17.535 [2024-11-10 05:29:10.430061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.535 [2024-11-10 05:29:10.430103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:17.535 [2024-11-10 05:29:10.430116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:17.535 [2024-11-10 05:29:10.430123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.535 [2024-11-10 05:29:10.430143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.535 [2024-11-10 05:29:10.430150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:17.535 [2024-11-10 05:29:10.430160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:17.535 [2024-11-10 05:29:10.430167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.535 [2024-11-10 05:29:10.430183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.535 [2024-11-10 05:29:10.430194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:17.535 [2024-11-10 05:29:10.430201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:17.535 [2024-11-10 05:29:10.430208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.535 [2024-11-10 05:29:10.430258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.187 ms, result 0 00:27:17.535 true 00:27:17.535 05:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:17.535 { 00:27:17.535 "name": "ftl", 00:27:17.535 "properties": [ 00:27:17.535 { 00:27:17.535 "name": "superblock_version", 00:27:17.535 "value": 5, 00:27:17.535 "read-only": true 00:27:17.535 }, 00:27:17.535 { 00:27:17.535 "name": "base_device", 00:27:17.535 "bands": [ 00:27:17.535 { 00:27:17.535 "id": 0, 00:27:17.535 "state": "FREE", 00:27:17.535 "validity": 0.0 
00:27:17.535 }, 00:27:17.535 { 00:27:17.535 "id": 1, 00:27:17.535 "state": "FREE", 00:27:17.535 "validity": 0.0 00:27:17.535 }, 00:27:17.535 { 00:27:17.535 "id": 2, 00:27:17.535 "state": "FREE", 00:27:17.535 "validity": 0.0 00:27:17.535 }, 00:27:17.535 { 00:27:17.535 "id": 3, 00:27:17.535 "state": "FREE", 00:27:17.535 "validity": 0.0 00:27:17.535 }, 00:27:17.535 { 00:27:17.535 "id": 4, 00:27:17.535 "state": "FREE", 00:27:17.535 "validity": 0.0 00:27:17.535 }, 00:27:17.536 { 00:27:17.536 "id": 5, 00:27:17.536 "state": "FREE", 00:27:17.536 "validity": 0.0 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "id": 6, 00:27:17.536 "state": "FREE", 00:27:17.536 "validity": 0.0 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "id": 7, 00:27:17.536 "state": "FREE", 00:27:17.536 "validity": 0.0 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "id": 8, 00:27:17.536 "state": "FREE", 00:27:17.536 "validity": 0.0 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "id": 9, 00:27:17.536 "state": "FREE", 00:27:17.536 "validity": 0.0 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "id": 10, 00:27:17.536 "state": "FREE", 00:27:17.536 "validity": 0.0 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "id": 11, 00:27:17.536 "state": "FREE", 00:27:17.536 "validity": 0.0 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "id": 12, 00:27:17.536 "state": "FREE", 00:27:17.536 "validity": 0.0 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "id": 13, 00:27:17.536 "state": "FREE", 00:27:17.536 "validity": 0.0 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "id": 14, 00:27:17.536 "state": "FREE", 00:27:17.536 "validity": 0.0 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "id": 15, 00:27:17.536 "state": "FREE", 00:27:17.536 "validity": 0.0 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "id": 16, 00:27:17.536 "state": "FREE", 00:27:17.536 "validity": 0.0 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "id": 17, 00:27:17.536 "state": "FREE", 00:27:17.536 "validity": 0.0 00:27:17.536 } 00:27:17.536 ], 00:27:17.536 "read-only": true 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "name": "cache_device", 00:27:17.536 "type": "bdev", 00:27:17.536 "chunks": [ 00:27:17.536 { 00:27:17.536 "id": 0, 00:27:17.536 "state": "INACTIVE", 00:27:17.536 "utilization": 0.0 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "id": 1, 00:27:17.536 "state": "CLOSED", 00:27:17.536 "utilization": 1.0 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "id": 2, 00:27:17.536 "state": "CLOSED", 00:27:17.536 "utilization": 1.0 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "id": 3, 00:27:17.536 "state": "OPEN", 00:27:17.536 "utilization": 0.001953125 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "id": 4, 00:27:17.536 "state": "OPEN", 00:27:17.536 "utilization": 0.0 00:27:17.536 } 00:27:17.536 ], 00:27:17.536 "read-only": true 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "name": "verbose_mode", 00:27:17.536 "value": true, 00:27:17.536 "unit": "", 00:27:17.536 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:17.536 }, 00:27:17.536 { 00:27:17.536 "name": "prep_upgrade_on_shutdown", 00:27:17.536 "value": false, 00:27:17.536 "unit": "", 00:27:17.536 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:17.536 } 00:27:17.536 ] 00:27:17.536 } 00:27:17.536 05:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:17.797 [2024-11-10 05:29:10.852470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:17.797 [2024-11-10 05:29:10.852502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:17.797 [2024-11-10 05:29:10.852513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:17.797 [2024-11-10 05:29:10.852519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.797 [2024-11-10 05:29:10.852536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.797 [2024-11-10 05:29:10.852543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:17.797 [2024-11-10 05:29:10.852550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:17.797 [2024-11-10 05:29:10.852556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.797 [2024-11-10 05:29:10.852572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.797 [2024-11-10 05:29:10.852578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:17.797 [2024-11-10 05:29:10.852584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:17.797 [2024-11-10 05:29:10.852590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.797 [2024-11-10 05:29:10.852633] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.155 ms, result 0 00:27:17.797 true 00:27:17.797 05:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:17.797 05:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:17.797 05:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:18.056 05:29:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:18.056 05:29:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:18.056 05:29:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:18.056 [2024-11-10 05:29:11.264824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.056 [2024-11-10 05:29:11.264855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:18.056 [2024-11-10 05:29:11.264864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:18.056 [2024-11-10 05:29:11.264871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.056 [2024-11-10 05:29:11.264888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.056 [2024-11-10 05:29:11.264895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:18.056 [2024-11-10 05:29:11.264902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:18.056 [2024-11-10 05:29:11.264908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.056 [2024-11-10 05:29:11.264923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.056 [2024-11-10 05:29:11.264930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:18.056 [2024-11-10 05:29:11.264936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:18.056 [2024-11-10 05:29:11.264942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:18.056 [2024-11-10 05:29:11.264986] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.151 ms, result 0 00:27:18.056 true 00:27:18.056 05:29:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:18.317 { 00:27:18.317 "name": "ftl", 00:27:18.317 "properties": [ 00:27:18.317 { 00:27:18.317 "name": "superblock_version", 00:27:18.317 "value": 5, 00:27:18.317 "read-only": true 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "name": "base_device", 00:27:18.317 "bands": [ 00:27:18.317 { 00:27:18.317 "id": 0, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 1, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 2, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 3, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 4, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 5, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 6, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 7, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 8, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 9, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 10, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 11, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 12, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 13, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 14, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 15, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 16, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 17, 00:27:18.317 "state": "FREE", 00:27:18.317 "validity": 0.0 00:27:18.317 } 00:27:18.317 ], 00:27:18.317 "read-only": true 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "name": "cache_device", 00:27:18.317 "type": "bdev", 00:27:18.317 "chunks": [ 00:27:18.317 { 00:27:18.317 "id": 0, 00:27:18.317 "state": "INACTIVE", 00:27:18.317 "utilization": 0.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 1, 00:27:18.317 "state": "CLOSED", 00:27:18.317 "utilization": 1.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 2, 00:27:18.317 "state": "CLOSED", 00:27:18.317 "utilization": 1.0 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 3, 00:27:18.317 "state": "OPEN", 00:27:18.317 "utilization": 0.001953125 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "id": 4, 00:27:18.317 "state": "OPEN", 00:27:18.317 "utilization": 0.0 00:27:18.317 } 00:27:18.317 ], 00:27:18.317 "read-only": true 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "name": "verbose_mode", 
00:27:18.317 "value": true, 00:27:18.317 "unit": "", 00:27:18.317 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:18.317 }, 00:27:18.317 { 00:27:18.317 "name": "prep_upgrade_on_shutdown", 00:27:18.317 "value": true, 00:27:18.317 "unit": "", 00:27:18.317 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:18.317 } 00:27:18.317 ] 00:27:18.317 } 00:27:18.317 05:29:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:18.317 05:29:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91908 ]] 00:27:18.317 05:29:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91908 00:27:18.317 05:29:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 91908 ']' 00:27:18.317 05:29:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 91908 00:27:18.317 05:29:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:18.317 05:29:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:18.317 05:29:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91908 00:27:18.317 05:29:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:18.317 killing process with pid 91908 00:27:18.317 05:29:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:18.317 05:29:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91908' 00:27:18.317 05:29:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 91908 00:27:18.317 05:29:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 91908 00:27:18.578 [2024-11-10 05:29:11.584393] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:18.578 [2024-11-10 05:29:11.587311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.578 [2024-11-10 05:29:11.587338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:18.578 [2024-11-10 05:29:11.587349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:18.578 [2024-11-10 05:29:11.587355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.578 [2024-11-10 05:29:11.587373] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:18.578 [2024-11-10 05:29:11.587750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.578 [2024-11-10 05:29:11.587768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:18.578 [2024-11-10 05:29:11.587776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.367 ms 00:27:18.578 [2024-11-10 05:29:11.587782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.719 [2024-11-10 05:29:19.665309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.719 [2024-11-10 05:29:19.665350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:26.719 [2024-11-10 05:29:19.665365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8077.481 ms 00:27:26.719 [2024-11-10 05:29:19.665373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.719 [2024-11-10 05:29:19.666886] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:26.719 [2024-11-10 05:29:19.667319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:26.719 [2024-11-10 05:29:19.667373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.491 ms 00:27:26.719 [2024-11-10 05:29:19.667401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.719 [2024-11-10 05:29:19.671171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.719 [2024-11-10 05:29:19.671245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:26.719 [2024-11-10 05:29:19.671275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.679 ms 00:27:26.719 [2024-11-10 05:29:19.671318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.719 [2024-11-10 05:29:19.674592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.719 [2024-11-10 05:29:19.674659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:26.719 [2024-11-10 05:29:19.674687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.109 ms 00:27:26.719 [2024-11-10 05:29:19.674711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.719 [2024-11-10 05:29:19.678657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.719 [2024-11-10 05:29:19.678731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:26.719 [2024-11-10 05:29:19.678761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.865 ms 00:27:26.719 [2024-11-10 05:29:19.678785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.719 [2024-11-10 05:29:19.678983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.719 [2024-11-10 05:29:19.679044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:26.719 [2024-11-10 05:29:19.679085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.116 ms 00:27:26.719 [2024-11-10 05:29:19.679110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.719 [2024-11-10 05:29:19.682348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.719 [2024-11-10 05:29:19.682443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:26.719 [2024-11-10 05:29:19.682479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.184 ms 00:27:26.720 [2024-11-10 05:29:19.682504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.685171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.720 [2024-11-10 05:29:19.685235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:26.720 [2024-11-10 05:29:19.685263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.540 ms 00:27:26.720 [2024-11-10 05:29:19.685286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.687873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.720 [2024-11-10 05:29:19.687937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:26.720 [2024-11-10 05:29:19.687985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.504 ms 00:27:26.720 [2024-11-10 05:29:19.688041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.690567] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.720 [2024-11-10 05:29:19.690642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:26.720 [2024-11-10 05:29:19.690673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.348 ms 00:27:26.720 [2024-11-10 05:29:19.690699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.690776] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:26.720 [2024-11-10 05:29:19.690817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:26.720 [2024-11-10 05:29:19.690849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:26.720 [2024-11-10 05:29:19.690878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:26.720 [2024-11-10 05:29:19.690905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:26.720 [2024-11-10 05:29:19.690932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:26.720 [2024-11-10 05:29:19.690958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:26.720 [2024-11-10 05:29:19.690985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:26.720 [2024-11-10 05:29:19.691042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:26.720 [2024-11-10 05:29:19.691069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:26.720 [2024-11-10 05:29:19.691096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:26.720 [2024-11-10 05:29:19.691122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:26.720 [2024-11-10 05:29:19.691149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:26.720 [2024-11-10 05:29:19.691175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:26.720 [2024-11-10 05:29:19.691202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:26.720 [2024-11-10 05:29:19.691228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:26.720 [2024-11-10 05:29:19.691254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:26.720 [2024-11-10 05:29:19.691281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:26.720 [2024-11-10 05:29:19.691307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:26.720 [2024-11-10 05:29:19.691338] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:26.720 [2024-11-10 05:29:19.691364] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 786d8882-3dd7-4f51-b235-b2b44e9b80fd 00:27:26.720 [2024-11-10 05:29:19.691410] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:26.720 [2024-11-10 05:29:19.691435] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:26.720 [2024-11-10 05:29:19.691459] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:26.720 [2024-11-10 05:29:19.691485] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:26.720 [2024-11-10 05:29:19.691509] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:26.720 [2024-11-10 05:29:19.691543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:26.720 [2024-11-10 05:29:19.691568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:26.720 [2024-11-10 05:29:19.691591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:26.720 [2024-11-10 05:29:19.691614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:26.720 [2024-11-10 05:29:19.691642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.720 [2024-11-10 05:29:19.691667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:26.720 [2024-11-10 05:29:19.691694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.866 ms 00:27:26.720 [2024-11-10 05:29:19.691719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.693620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.720 [2024-11-10 05:29:19.693646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:26.720 [2024-11-10 05:29:19.693656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.857 ms 00:27:26.720 [2024-11-10 05:29:19.693669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.693759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.720 [2024-11-10 05:29:19.693769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:26.720 [2024-11-10 05:29:19.693778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:27:26.720 [2024-11-10 05:29:19.693787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.699020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.720 [2024-11-10 05:29:19.699047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:26.720 [2024-11-10 05:29:19.699062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.720 [2024-11-10 05:29:19.699070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.699095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.720 [2024-11-10 05:29:19.699103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:26.720 [2024-11-10 05:29:19.699112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.720 [2024-11-10 05:29:19.699120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.699162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.720 [2024-11-10 05:29:19.699172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:26.720 [2024-11-10 05:29:19.699182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.720 [2024-11-10 05:29:19.699193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.699209] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.720 [2024-11-10 05:29:19.699218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:26.720 [2024-11-10 05:29:19.699226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.720 [2024-11-10 05:29:19.699235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.708222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.720 [2024-11-10 05:29:19.708256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:26.720 [2024-11-10 05:29:19.708269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.720 [2024-11-10 05:29:19.708278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.715788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.720 [2024-11-10 05:29:19.715813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:26.720 [2024-11-10 05:29:19.715824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.720 [2024-11-10 05:29:19.715833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.715890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.720 [2024-11-10 05:29:19.715900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:26.720 [2024-11-10 05:29:19.715909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.720 [2024-11-10 05:29:19.715917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.715973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.720 [2024-11-10 05:29:19.715985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:26.720 [2024-11-10 05:29:19.716012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.720 [2024-11-10 05:29:19.716020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.716082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.720 [2024-11-10 05:29:19.716093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:26.720 [2024-11-10 05:29:19.716103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.720 [2024-11-10 05:29:19.716111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.716139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.720 [2024-11-10 05:29:19.716153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:26.720 [2024-11-10 05:29:19.716161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.720 [2024-11-10 05:29:19.716170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.720 [2024-11-10 05:29:19.716205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.720 [2024-11-10 05:29:19.716216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:26.720 [2024-11-10 05:29:19.716225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.720 [2024-11-10 05:29:19.716233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.721 
[2024-11-10 05:29:19.716284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.721 [2024-11-10 05:29:19.716295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:26.721 [2024-11-10 05:29:19.716304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.721 [2024-11-10 05:29:19.716314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.721 [2024-11-10 05:29:19.716430] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8129.069 ms, result 0 00:27:30.928 05:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:30.928 05:29:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:30.928 05:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:30.928 05:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:30.928 05:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:30.928 05:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92405 00:27:30.928 05:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:30.928 05:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92405 00:27:30.928 05:29:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92405 ']' 00:27:30.928 05:29:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.928 05:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:30.928 05:29:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:30.928 05:29:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.928 05:29:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:30.928 05:29:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:30.928 [2024-11-10 05:29:23.558284] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
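The records above complete the core of the scenario: with prep_upgrade_on_shutdown set, killing pid 91908 drives the 'FTL shutdown' management chain (Stop core poller, Persist L2P, Persist NV cache / valid map / P2L / band / trim metadata, Persist superblock, Set FTL clean state) to completion in about 8.1 s, after which a fresh spdk_tgt (pid 92405) is started from the same tgt.json so FTL can come back up on the persisted state. A minimal sketch of that sequence, reusing the paths, flags, and bdev name visible in the log — the sleep-based readiness wait and the omission of the test's error handling are assumptions, not the real test logic:

```bash
#!/usr/bin/env bash
# Sketch of the shutdown-for-upgrade flow traced above (an assumed
# simplification of test/ftl/upgrade_shutdown.sh, not the test itself).
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

# Start the target from the persisted FTL configuration.
"$SPDK_DIR/build/bin/spdk_tgt" --cpumask='[0]' \
    --config="$SPDK_DIR/test/ftl/config/tgt.json" &
spdk_tgt_pid=$!
sleep 3  # the real test polls the RPC socket (waitforlisten) instead

# Arm the upgrade path: on shutdown FTL will persist L2P, NV cache and
# band metadata and mark the superblock clean ("Set FTL clean state").
"$RPC" bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true

# A plain SIGTERM then runs the 'FTL shutdown' chain seen in the log.
kill "$spdk_tgt_pid"
wait "$spdk_tgt_pid"
```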
00:27:30.928 [2024-11-10 05:29:23.558405] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92405 ] 00:27:30.928 [2024-11-10 05:29:23.707928] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.928 [2024-11-10 05:29:23.742435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.928 [2024-11-10 05:29:24.014042] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:30.928 [2024-11-10 05:29:24.014101] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:30.928 [2024-11-10 05:29:24.161532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.928 [2024-11-10 05:29:24.161572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:30.928 [2024-11-10 05:29:24.161587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:30.928 [2024-11-10 05:29:24.161598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.929 [2024-11-10 05:29:24.161654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.929 [2024-11-10 05:29:24.161665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:30.929 [2024-11-10 05:29:24.161673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:30.929 [2024-11-10 05:29:24.161681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.929 [2024-11-10 05:29:24.161704] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:31.191 [2024-11-10 05:29:24.161967] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:31.191 [2024-11-10 05:29:24.162017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.191 [2024-11-10 05:29:24.162030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:31.191 [2024-11-10 05:29:24.162040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.320 ms 00:27:31.191 [2024-11-10 05:29:24.162048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.191 [2024-11-10 05:29:24.163338] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:31.191 [2024-11-10 05:29:24.166204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.191 [2024-11-10 05:29:24.166235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:31.191 [2024-11-10 05:29:24.166245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.867 ms 00:27:31.191 [2024-11-10 05:29:24.166260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.191 [2024-11-10 05:29:24.166319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.191 [2024-11-10 05:29:24.166329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:31.191 [2024-11-10 05:29:24.166341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:31.191 [2024-11-10 05:29:24.166348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.191 [2024-11-10 05:29:24.171886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.191 [2024-11-10 
05:29:24.171912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:31.191 [2024-11-10 05:29:24.171925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.475 ms 00:27:31.191 [2024-11-10 05:29:24.171932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.191 [2024-11-10 05:29:24.172004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.191 [2024-11-10 05:29:24.172014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:31.191 [2024-11-10 05:29:24.172022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:27:31.191 [2024-11-10 05:29:24.172030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.191 [2024-11-10 05:29:24.172071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.191 [2024-11-10 05:29:24.172081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:31.191 [2024-11-10 05:29:24.172093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:31.191 [2024-11-10 05:29:24.172102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.191 [2024-11-10 05:29:24.172124] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:31.191 [2024-11-10 05:29:24.173554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.191 [2024-11-10 05:29:24.173579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:31.191 [2024-11-10 05:29:24.173588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.436 ms 00:27:31.191 [2024-11-10 05:29:24.173595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.191 [2024-11-10 05:29:24.173623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.191 [2024-11-10 05:29:24.173632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:31.191 [2024-11-10 05:29:24.173640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:31.191 [2024-11-10 05:29:24.173654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.191 [2024-11-10 05:29:24.173673] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:31.191 [2024-11-10 05:29:24.173692] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:31.191 [2024-11-10 05:29:24.173726] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:31.191 [2024-11-10 05:29:24.173742] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:31.191 [2024-11-10 05:29:24.173844] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:31.191 [2024-11-10 05:29:24.173855] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:31.191 [2024-11-10 05:29:24.173868] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:31.191 [2024-11-10 05:29:24.173881] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:31.191 [2024-11-10 05:29:24.173890] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:31.191 [2024-11-10 05:29:24.173902] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:31.191 [2024-11-10 05:29:24.173912] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:31.191 [2024-11-10 05:29:24.173920] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:31.191 [2024-11-10 05:29:24.173927] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:31.191 [2024-11-10 05:29:24.173937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.192 [2024-11-10 05:29:24.173944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:31.192 [2024-11-10 05:29:24.173951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.265 ms 00:27:31.192 [2024-11-10 05:29:24.173962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.192 [2024-11-10 05:29:24.174075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.192 [2024-11-10 05:29:24.174088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:31.192 [2024-11-10 05:29:24.174096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.094 ms 00:27:31.192 [2024-11-10 05:29:24.174103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.192 [2024-11-10 05:29:24.174207] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:31.192 [2024-11-10 05:29:24.174221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:31.192 [2024-11-10 05:29:24.174230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:31.192 [2024-11-10 05:29:24.174240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.192 [2024-11-10 05:29:24.174249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:31.192 [2024-11-10 05:29:24.174257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:31.192 [2024-11-10 05:29:24.174265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:31.192 [2024-11-10 05:29:24.174274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:31.192 [2024-11-10 05:29:24.174283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:31.192 [2024-11-10 05:29:24.174291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.192 [2024-11-10 05:29:24.174299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:31.192 [2024-11-10 05:29:24.174307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:31.192 [2024-11-10 05:29:24.174315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.192 [2024-11-10 05:29:24.174323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:31.192 [2024-11-10 05:29:24.174336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:31.192 [2024-11-10 05:29:24.174344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.192 [2024-11-10 05:29:24.174352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:31.192 [2024-11-10 05:29:24.174366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:31.192 [2024-11-10 05:29:24.174374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.192 [2024-11-10 05:29:24.174382] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:31.192 [2024-11-10 05:29:24.174393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:31.192 [2024-11-10 05:29:24.174401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:31.192 [2024-11-10 05:29:24.174409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:31.192 [2024-11-10 05:29:24.174417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:31.192 [2024-11-10 05:29:24.174424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:31.192 [2024-11-10 05:29:24.174432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:31.192 [2024-11-10 05:29:24.174440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:31.192 [2024-11-10 05:29:24.174447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:31.192 [2024-11-10 05:29:24.174455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:31.192 [2024-11-10 05:29:24.174463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:31.192 [2024-11-10 05:29:24.174471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:31.192 [2024-11-10 05:29:24.174478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:31.192 [2024-11-10 05:29:24.174487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:31.192 [2024-11-10 05:29:24.174497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.192 [2024-11-10 05:29:24.174504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:31.192 [2024-11-10 05:29:24.174512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:31.192 [2024-11-10 05:29:24.174520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.192 [2024-11-10 05:29:24.174528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:31.192 [2024-11-10 05:29:24.174536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:31.192 [2024-11-10 05:29:24.174543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.192 [2024-11-10 05:29:24.174552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:31.192 [2024-11-10 05:29:24.174561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:31.192 [2024-11-10 05:29:24.174568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.192 [2024-11-10 05:29:24.174575] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:31.192 [2024-11-10 05:29:24.174586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:31.192 [2024-11-10 05:29:24.174593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:31.192 [2024-11-10 05:29:24.174601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:31.192 [2024-11-10 05:29:24.174611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:31.192 [2024-11-10 05:29:24.174618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:31.192 [2024-11-10 05:29:24.174627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:31.192 [2024-11-10 05:29:24.174634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:31.192 [2024-11-10 05:29:24.174640] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:31.192 [2024-11-10 05:29:24.174648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:31.192 [2024-11-10 05:29:24.174656] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:31.192 [2024-11-10 05:29:24.174666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:31.192 [2024-11-10 05:29:24.174674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:31.192 [2024-11-10 05:29:24.174681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:31.192 [2024-11-10 05:29:24.174689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:31.192 [2024-11-10 05:29:24.174697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:31.192 [2024-11-10 05:29:24.174704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:31.192 [2024-11-10 05:29:24.174711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:31.192 [2024-11-10 05:29:24.174718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:31.192 [2024-11-10 05:29:24.174725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:31.192 [2024-11-10 05:29:24.174732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:31.192 [2024-11-10 05:29:24.174739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:31.192 [2024-11-10 05:29:24.174748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:31.192 [2024-11-10 05:29:24.174756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:31.192 [2024-11-10 05:29:24.174762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:31.192 [2024-11-10 05:29:24.174769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:31.192 [2024-11-10 05:29:24.174776] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:31.192 [2024-11-10 05:29:24.174784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:31.192 [2024-11-10 05:29:24.174792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:31.192 [2024-11-10 05:29:24.174800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:31.192 [2024-11-10 05:29:24.174807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:31.192 [2024-11-10 05:29:24.174814] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:31.192 [2024-11-10 05:29:24.174822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.192 [2024-11-10 05:29:24.174829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:31.192 [2024-11-10 05:29:24.174836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.683 ms 00:27:31.192 [2024-11-10 05:29:24.174845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.192 [2024-11-10 05:29:24.174894] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:31.192 [2024-11-10 05:29:24.174904] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:34.497 [2024-11-10 05:29:27.599036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.599091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:34.497 [2024-11-10 05:29:27.599105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3424.128 ms 00:27:34.497 [2024-11-10 05:29:27.599113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.609234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.609275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:34.497 [2024-11-10 05:29:27.609286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.020 ms 00:27:34.497 [2024-11-10 05:29:27.609294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.609347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.609363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:34.497 [2024-11-10 05:29:27.609370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:34.497 [2024-11-10 05:29:27.609377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.627457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.627496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:34.497 [2024-11-10 05:29:27.627507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.039 ms 00:27:34.497 [2024-11-10 05:29:27.627514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.627551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.627561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:34.497 [2024-11-10 05:29:27.627568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:34.497 [2024-11-10 05:29:27.627575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.628011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.628029] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:34.497 [2024-11-10 05:29:27.628039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.386 ms 00:27:34.497 [2024-11-10 05:29:27.628046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.628086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.628099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:34.497 [2024-11-10 05:29:27.628107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:34.497 [2024-11-10 05:29:27.628117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.634149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.634189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:34.497 [2024-11-10 05:29:27.634203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.010 ms 00:27:34.497 [2024-11-10 05:29:27.634214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.637363] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:34.497 [2024-11-10 05:29:27.637404] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:34.497 [2024-11-10 05:29:27.637419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.637430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:34.497 [2024-11-10 05:29:27.637441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.100 ms 00:27:34.497 [2024-11-10 05:29:27.637451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.642769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.642806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:34.497 [2024-11-10 05:29:27.642826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.265 ms 00:27:34.497 [2024-11-10 05:29:27.642837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.644760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.644786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:34.497 [2024-11-10 05:29:27.644793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.870 ms 00:27:34.497 [2024-11-10 05:29:27.644799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.646567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.646594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:34.497 [2024-11-10 05:29:27.646601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.736 ms 00:27:34.497 [2024-11-10 05:29:27.646606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.646858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.646874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:34.497 [2024-11-10 
05:29:27.646883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:27:34.497 [2024-11-10 05:29:27.646889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.663178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.663213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:34.497 [2024-11-10 05:29:27.663227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.274 ms 00:27:34.497 [2024-11-10 05:29:27.663233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.669075] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:34.497 [2024-11-10 05:29:27.669707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.669728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:34.497 [2024-11-10 05:29:27.669736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.435 ms 00:27:34.497 [2024-11-10 05:29:27.669747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.669801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.669809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:34.497 [2024-11-10 05:29:27.669817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:34.497 [2024-11-10 05:29:27.669823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.669855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.669866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:34.497 [2024-11-10 05:29:27.669872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:34.497 [2024-11-10 05:29:27.669879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.669898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.669904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:34.497 [2024-11-10 05:29:27.669910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:34.497 [2024-11-10 05:29:27.669916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.669940] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:34.497 [2024-11-10 05:29:27.669951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.669956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:34.497 [2024-11-10 05:29:27.669965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:34.497 [2024-11-10 05:29:27.669970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.673438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.673469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:34.497 [2024-11-10 05:29:27.673477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.453 ms 00:27:34.497 [2024-11-10 05:29:27.673486] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.673548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.497 [2024-11-10 05:29:27.673556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:34.497 [2024-11-10 05:29:27.673563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:34.497 [2024-11-10 05:29:27.673569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.497 [2024-11-10 05:29:27.674373] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3512.524 ms, result 0 00:27:34.497 [2024-11-10 05:29:27.690160] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.497 [2024-11-10 05:29:27.706155] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:34.497 [2024-11-10 05:29:27.714232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:34.758 05:29:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:34.758 05:29:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:34.758 05:29:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:34.758 05:29:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:34.758 05:29:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:35.020 [2024-11-10 05:29:27.994331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.020 [2024-11-10 05:29:27.994362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:35.020 [2024-11-10 05:29:27.994372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:35.020 [2024-11-10 05:29:27.994378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.020 [2024-11-10 05:29:27.994395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.020 [2024-11-10 05:29:27.994404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:35.020 [2024-11-10 05:29:27.994413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:35.020 [2024-11-10 05:29:27.994419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.020 [2024-11-10 05:29:27.994435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.020 [2024-11-10 05:29:27.994441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:35.020 [2024-11-10 05:29:27.994447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:35.020 [2024-11-10 05:29:27.994453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.020 [2024-11-10 05:29:27.994493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.157 ms, result 0 00:27:35.020 true 00:27:35.020 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:35.020 { 00:27:35.020 "name": "ftl", 00:27:35.020 "properties": [ 00:27:35.020 { 00:27:35.020 "name": "superblock_version", 00:27:35.020 "value": 5, 00:27:35.020 "read-only": true 00:27:35.020 }, 00:27:35.020 { 
00:27:35.020 "name": "base_device", 00:27:35.021 "bands": [ 00:27:35.021 { 00:27:35.021 "id": 0, 00:27:35.021 "state": "CLOSED", 00:27:35.021 "validity": 1.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 1, 00:27:35.021 "state": "CLOSED", 00:27:35.021 "validity": 1.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 2, 00:27:35.021 "state": "CLOSED", 00:27:35.021 "validity": 0.007843137254901933 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 3, 00:27:35.021 "state": "FREE", 00:27:35.021 "validity": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 4, 00:27:35.021 "state": "FREE", 00:27:35.021 "validity": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 5, 00:27:35.021 "state": "FREE", 00:27:35.021 "validity": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 6, 00:27:35.021 "state": "FREE", 00:27:35.021 "validity": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 7, 00:27:35.021 "state": "FREE", 00:27:35.021 "validity": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 8, 00:27:35.021 "state": "FREE", 00:27:35.021 "validity": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 9, 00:27:35.021 "state": "FREE", 00:27:35.021 "validity": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 10, 00:27:35.021 "state": "FREE", 00:27:35.021 "validity": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 11, 00:27:35.021 "state": "FREE", 00:27:35.021 "validity": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 12, 00:27:35.021 "state": "FREE", 00:27:35.021 "validity": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 13, 00:27:35.021 "state": "FREE", 00:27:35.021 "validity": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 14, 00:27:35.021 "state": "FREE", 00:27:35.021 "validity": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 15, 00:27:35.021 "state": "FREE", 00:27:35.021 "validity": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 16, 00:27:35.021 "state": "FREE", 00:27:35.021 "validity": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 17, 00:27:35.021 "state": "FREE", 00:27:35.021 "validity": 0.0 00:27:35.021 } 00:27:35.021 ], 00:27:35.021 "read-only": true 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "name": "cache_device", 00:27:35.021 "type": "bdev", 00:27:35.021 "chunks": [ 00:27:35.021 { 00:27:35.021 "id": 0, 00:27:35.021 "state": "INACTIVE", 00:27:35.021 "utilization": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 1, 00:27:35.021 "state": "OPEN", 00:27:35.021 "utilization": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 2, 00:27:35.021 "state": "OPEN", 00:27:35.021 "utilization": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 3, 00:27:35.021 "state": "FREE", 00:27:35.021 "utilization": 0.0 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "id": 4, 00:27:35.021 "state": "FREE", 00:27:35.021 "utilization": 0.0 00:27:35.021 } 00:27:35.021 ], 00:27:35.021 "read-only": true 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "name": "verbose_mode", 00:27:35.021 "value": true, 00:27:35.021 "unit": "", 00:27:35.021 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:35.021 }, 00:27:35.021 { 00:27:35.021 "name": "prep_upgrade_on_shutdown", 00:27:35.021 "value": false, 00:27:35.021 "unit": "", 00:27:35.021 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:35.021 } 00:27:35.021 ] 00:27:35.021 } 00:27:35.021 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | 
.chunks[] | select(.utilization != 0.0)] | length' 00:27:35.021 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:35.021 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:35.285 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:35.285 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:35.285 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:35.285 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:35.285 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:35.547 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:35.547 Validate MD5 checksum, iteration 1 00:27:35.547 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:35.547 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:35.547 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:35.547 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:35.547 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:35.547 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:35.547 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:35.547 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:35.547 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:35.547 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:35.547 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:35.547 05:29:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:35.547 [2024-11-10 05:29:28.704952] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
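[editor] The two jq pipelines above count, from the bdev_ftl_get_properties JSON, how many cache chunks have non-zero utilization and how many bands report OPENED; both come back 0, so nothing needs flushing before the checksum passes start. Each "Validate MD5 checksum" iteration then reads a 1 GiB window from the ftln1 bdev through the NVMe/TCP initiator config and hashes it. A minimal sketch of that read-and-hash step follows; the $SPDK_DIR variable and the read_and_hash helper name are assumptions of this sketch, while every spdk_dd flag is copied verbatim from the trace:

SPDK_DIR=/home/vagrant/spdk_repo/spdk          # assumed checkout location
TESTFILE=$SPDK_DIR/test/ftl/file

# Read 1024 blocks of 1 MiB from ftln1 (the FTL bdev attached through
# ini.json) at the given block offset, then print the output file's MD5.
read_and_hash() {
  local skip=$1
  "$SPDK_DIR/build/bin/spdk_dd" '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json="$SPDK_DIR/test/ftl/config/ini.json" \
      --ib=ftln1 --of="$TESTFILE" \
      --bs=1048576 --count=1024 --qd=2 --skip="$skip"
  md5sum "$TESTFILE" | cut -f1 -d' '
}

read_and_hash 0    # iteration 1 in the trace; iteration 2 uses skip=1024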
00:27:35.547 [2024-11-10 05:29:28.705224] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92475 ] 00:27:35.809 [2024-11-10 05:29:28.856047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.809 [2024-11-10 05:29:28.887748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.198  [2024-11-10T05:29:31.374Z] Copying: 600/1024 [MB] (600 MBps) [2024-11-10T05:29:32.005Z] Copying: 1024/1024 [MB] (average 571 MBps) 00:27:38.769 00:27:38.769 05:29:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:38.769 05:29:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:40.680 05:29:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:40.680 Validate MD5 checksum, iteration 2 00:27:40.680 05:29:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3846eae18d6f1e10bc1830efa31eef5b 00:27:40.680 05:29:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3846eae18d6f1e10bc1830efa31eef5b != \3\8\4\6\e\a\e\1\8\d\6\f\1\e\1\0\b\c\1\8\3\0\e\f\a\3\1\e\e\f\5\b ]] 00:27:40.680 05:29:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:40.680 05:29:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:40.680 05:29:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:40.680 05:29:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:40.680 05:29:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:40.680 05:29:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:40.680 05:29:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:40.680 05:29:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:40.680 05:29:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:40.940 [2024-11-10 05:29:33.916922] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
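[editor] The backslash-riddled comparison above (3846eae… != \3\8\4\6\e…) is only xtrace quoting: bash prints the right-hand side of [[ $sum != $expected ]] with every character escaped, so a matching checksum still renders as two different-looking strings. One plausible shape of the record-then-verify loop, hedged as a reconstruction (the checksums array name and the record-on-first-pass logic are assumptions; the real logic lives in test/ftl/upgrade_shutdown.sh):

declare -a checksums        # assumed name for the recorded sums
iterations=2
skip=0
for ((i = 0; i < iterations; i++)); do
  sum=$(read_and_hash "$skip")            # helper sketched earlier
  if [[ -z ${checksums[i]:-} ]]; then
    checksums[i]=$sum                     # first pass: record the sum
  elif [[ $sum != "${checksums[i]}" ]]; then
    echo "MD5 mismatch in iteration $i" >&2
    exit 1
  fi
  skip=$((skip + 1024))                   # advance to the next 1 GiB window
done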
00:27:40.940 [2024-11-10 05:29:33.917049] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92536 ] 00:27:40.940 [2024-11-10 05:29:34.065316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.940 [2024-11-10 05:29:34.107407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.326  [2024-11-10T05:29:36.497Z] Copying: 542/1024 [MB] (542 MBps) [2024-11-10T05:29:37.066Z] Copying: 1024/1024 [MB] (average 562 MBps) 00:27:43.830 00:27:43.830 05:29:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:43.830 05:29:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:45.744 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a0f5e7b9c2e418aa85757fa04160fd0f 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a0f5e7b9c2e418aa85757fa04160fd0f != \a\0\f\5\e\7\b\9\c\2\e\4\1\8\a\a\8\5\7\5\7\f\a\0\4\1\6\0\f\d\0\f ]] 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92405 ]] 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92405 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92596 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92596 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92596 ']' 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
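[editor] Next in the trace is the deliberate dirty shutdown: tcp_target_shutdown_dirty sends SIGKILL to pid 92405 so that none of the FTL shutdown actions run, then tcp_target_setup starts a fresh spdk_tgt (pid 92596) from the saved tgt.json; the long recovery sequence that follows is the point of the test. A sketch of that step, under the same $SPDK_DIR and $spdk_tgt_pid assumptions as the earlier sketches:

# SIGKILL leaves the device dirty on purpose: no clean-state marker is
# written, so the next startup must recover from shared memory and the
# P2L checkpoints (visible below as 'Restore P2L checkpoints' etc.).
kill -9 "$spdk_tgt_pid"                   # pid 92405 in this run

"$SPDK_DIR/build/bin/spdk_tgt" '--cpumask=[0]' \
    --config="$SPDK_DIR/test/ftl/config/tgt.json" &
spdk_tgt_pid=$!

# waitforlisten in autotest_common.sh polls the RPC socket with retries;
# a simple equivalent:
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
      >/dev/null 2>&1; do
  sleep 0.1
done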
00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:46.006 05:29:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:46.006 [2024-11-10 05:29:39.065204] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:46.006 [2024-11-10 05:29:39.065429] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92596 ] 00:27:46.006 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 92405 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:46.006 [2024-11-10 05:29:39.216161] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.267 [2024-11-10 05:29:39.290288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.528 [2024-11-10 05:29:39.718644] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:46.528 [2024-11-10 05:29:39.718737] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:46.791 [2024-11-10 05:29:39.872181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.791 [2024-11-10 05:29:39.872242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:46.791 [2024-11-10 05:29:39.872263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:46.791 [2024-11-10 05:29:39.872272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.791 [2024-11-10 05:29:39.872351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.791 [2024-11-10 05:29:39.872363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:46.791 [2024-11-10 05:29:39.872373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:46.791 [2024-11-10 05:29:39.872383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.791 [2024-11-10 05:29:39.872416] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:46.791 [2024-11-10 05:29:39.872732] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:46.791 [2024-11-10 05:29:39.872761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.791 [2024-11-10 05:29:39.872772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:46.791 [2024-11-10 05:29:39.872785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.356 ms 00:27:46.791 [2024-11-10 05:29:39.872794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.791 [2024-11-10 05:29:39.873141] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:46.791 [2024-11-10 05:29:39.880760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.791 [2024-11-10 05:29:39.880812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:46.791 [2024-11-10 05:29:39.880825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.618 ms 00:27:46.791 [2024-11-10 05:29:39.880848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.791 [2024-11-10 05:29:39.882691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:27:46.791 [2024-11-10 05:29:39.882734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:46.791 [2024-11-10 05:29:39.882746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:27:46.791 [2024-11-10 05:29:39.882755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.791 [2024-11-10 05:29:39.883131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.791 [2024-11-10 05:29:39.883146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:46.791 [2024-11-10 05:29:39.883160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.281 ms 00:27:46.791 [2024-11-10 05:29:39.883170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.791 [2024-11-10 05:29:39.883214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.791 [2024-11-10 05:29:39.883225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:46.791 [2024-11-10 05:29:39.883234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:46.791 [2024-11-10 05:29:39.883242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.791 [2024-11-10 05:29:39.883273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.791 [2024-11-10 05:29:39.883285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:46.791 [2024-11-10 05:29:39.883294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:46.791 [2024-11-10 05:29:39.883309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.791 [2024-11-10 05:29:39.883334] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:46.791 [2024-11-10 05:29:39.884728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.791 [2024-11-10 05:29:39.884767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:46.791 [2024-11-10 05:29:39.884779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.400 ms 00:27:46.791 [2024-11-10 05:29:39.884789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.791 [2024-11-10 05:29:39.884831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.791 [2024-11-10 05:29:39.884840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:46.791 [2024-11-10 05:29:39.884850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:46.791 [2024-11-10 05:29:39.884865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.791 [2024-11-10 05:29:39.884888] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:46.791 [2024-11-10 05:29:39.884914] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:46.791 [2024-11-10 05:29:39.884959] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:46.791 [2024-11-10 05:29:39.884976] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:46.791 [2024-11-10 05:29:39.885111] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:46.791 [2024-11-10 05:29:39.885133] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:46.791 [2024-11-10 05:29:39.885149] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:46.791 [2024-11-10 05:29:39.885162] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:46.791 [2024-11-10 05:29:39.885171] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:46.791 [2024-11-10 05:29:39.885180] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:46.791 [2024-11-10 05:29:39.885188] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:46.791 [2024-11-10 05:29:39.885201] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:46.791 [2024-11-10 05:29:39.885209] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:46.791 [2024-11-10 05:29:39.885219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.791 [2024-11-10 05:29:39.885227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:46.791 [2024-11-10 05:29:39.885236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.335 ms 00:27:46.791 [2024-11-10 05:29:39.885244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.791 [2024-11-10 05:29:39.885336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.791 [2024-11-10 05:29:39.885346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:46.791 [2024-11-10 05:29:39.885355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:27:46.791 [2024-11-10 05:29:39.885365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.791 [2024-11-10 05:29:39.885472] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:46.791 [2024-11-10 05:29:39.885484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:46.791 [2024-11-10 05:29:39.885497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:46.791 [2024-11-10 05:29:39.885509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.791 [2024-11-10 05:29:39.885518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:46.791 [2024-11-10 05:29:39.885527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:46.791 [2024-11-10 05:29:39.885537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:46.791 [2024-11-10 05:29:39.885546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:46.792 [2024-11-10 05:29:39.885554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:46.792 [2024-11-10 05:29:39.885562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.792 [2024-11-10 05:29:39.885571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:46.792 [2024-11-10 05:29:39.885581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:46.792 [2024-11-10 05:29:39.885596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.792 [2024-11-10 05:29:39.885607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:46.792 [2024-11-10 05:29:39.885615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:27:46.792 [2024-11-10 05:29:39.885630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.792 [2024-11-10 05:29:39.885639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:46.792 [2024-11-10 05:29:39.885647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:46.792 [2024-11-10 05:29:39.885655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.792 [2024-11-10 05:29:39.885665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:46.792 [2024-11-10 05:29:39.885673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:46.792 [2024-11-10 05:29:39.885681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:46.792 [2024-11-10 05:29:39.885689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:46.792 [2024-11-10 05:29:39.885697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:46.792 [2024-11-10 05:29:39.885704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:46.792 [2024-11-10 05:29:39.885712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:46.792 [2024-11-10 05:29:39.885721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:46.792 [2024-11-10 05:29:39.885728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:46.792 [2024-11-10 05:29:39.885736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:46.792 [2024-11-10 05:29:39.885744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:46.792 [2024-11-10 05:29:39.885753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:46.792 [2024-11-10 05:29:39.885763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:46.792 [2024-11-10 05:29:39.885771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:46.792 [2024-11-10 05:29:39.885778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.792 [2024-11-10 05:29:39.885785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:46.792 [2024-11-10 05:29:39.885792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:46.792 [2024-11-10 05:29:39.885802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.792 [2024-11-10 05:29:39.885810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:46.792 [2024-11-10 05:29:39.885816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:46.792 [2024-11-10 05:29:39.885823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.792 [2024-11-10 05:29:39.885830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:46.792 [2024-11-10 05:29:39.885837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:46.792 [2024-11-10 05:29:39.885844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:46.792 [2024-11-10 05:29:39.885851] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:46.792 [2024-11-10 05:29:39.885862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:46.792 [2024-11-10 05:29:39.885871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:46.792 [2024-11-10 05:29:39.885880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:27:46.792 [2024-11-10 05:29:39.885892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:46.792 [2024-11-10 05:29:39.885900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:46.792 [2024-11-10 05:29:39.885907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:46.792 [2024-11-10 05:29:39.885914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:46.792 [2024-11-10 05:29:39.885921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:46.792 [2024-11-10 05:29:39.885928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:46.792 [2024-11-10 05:29:39.885936] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:46.792 [2024-11-10 05:29:39.885945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:46.792 [2024-11-10 05:29:39.885957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:46.792 [2024-11-10 05:29:39.885964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:46.792 [2024-11-10 05:29:39.885971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:46.792 [2024-11-10 05:29:39.885978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:46.792 [2024-11-10 05:29:39.885985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:46.792 [2024-11-10 05:29:39.886009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:46.792 [2024-11-10 05:29:39.886016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:46.792 [2024-11-10 05:29:39.886023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:46.792 [2024-11-10 05:29:39.886038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:46.792 [2024-11-10 05:29:39.886048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:46.792 [2024-11-10 05:29:39.886056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:46.792 [2024-11-10 05:29:39.886063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:46.792 [2024-11-10 05:29:39.886070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:46.792 [2024-11-10 05:29:39.886077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:46.792 [2024-11-10 05:29:39.886087] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:27:46.792 [2024-11-10 05:29:39.886096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:46.792 [2024-11-10 05:29:39.886104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:46.792 [2024-11-10 05:29:39.886113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:46.792 [2024-11-10 05:29:39.886120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:46.792 [2024-11-10 05:29:39.886128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:46.792 [2024-11-10 05:29:39.886135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.792 [2024-11-10 05:29:39.886143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:46.792 [2024-11-10 05:29:39.886154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.732 ms 00:27:46.792 [2024-11-10 05:29:39.886162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.792 [2024-11-10 05:29:39.902059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.792 [2024-11-10 05:29:39.902099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:46.792 [2024-11-10 05:29:39.902117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.842 ms 00:27:46.792 [2024-11-10 05:29:39.902127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.792 [2024-11-10 05:29:39.902171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.792 [2024-11-10 05:29:39.902189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:46.792 [2024-11-10 05:29:39.902202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:46.792 [2024-11-10 05:29:39.902211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.792 [2024-11-10 05:29:39.927896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.792 [2024-11-10 05:29:39.927970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:46.792 [2024-11-10 05:29:39.927985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.619 ms 00:27:46.792 [2024-11-10 05:29:39.928017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.792 [2024-11-10 05:29:39.928076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.792 [2024-11-10 05:29:39.928088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:46.792 [2024-11-10 05:29:39.928097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:46.792 [2024-11-10 05:29:39.928107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.792 [2024-11-10 05:29:39.928244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.792 [2024-11-10 05:29:39.928257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:46.792 [2024-11-10 05:29:39.928276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:27:46.792 [2024-11-10 05:29:39.928285] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:46.792 [2024-11-10 05:29:39.928342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.792 [2024-11-10 05:29:39.928352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:46.792 [2024-11-10 05:29:39.928362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:46.792 [2024-11-10 05:29:39.928371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.792 [2024-11-10 05:29:39.940661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.792 [2024-11-10 05:29:39.940710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:46.792 [2024-11-10 05:29:39.940726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.267 ms 00:27:46.792 [2024-11-10 05:29:39.940738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.792 [2024-11-10 05:29:39.940891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.792 [2024-11-10 05:29:39.940909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:46.792 [2024-11-10 05:29:39.940923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:46.793 [2024-11-10 05:29:39.940935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.793 [2024-11-10 05:29:39.948914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.793 [2024-11-10 05:29:39.948964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:46.793 [2024-11-10 05:29:39.948977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.945 ms 00:27:46.793 [2024-11-10 05:29:39.948986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.793 [2024-11-10 05:29:39.951113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.793 [2024-11-10 05:29:39.951154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:46.793 [2024-11-10 05:29:39.951167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.299 ms 00:27:46.793 [2024-11-10 05:29:39.951177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.793 [2024-11-10 05:29:39.982261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.793 [2024-11-10 05:29:39.982559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:46.793 [2024-11-10 05:29:39.982587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.013 ms 00:27:46.793 [2024-11-10 05:29:39.982597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.793 [2024-11-10 05:29:39.982754] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:46.793 [2024-11-10 05:29:39.982899] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:46.793 [2024-11-10 05:29:39.983083] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:46.793 [2024-11-10 05:29:39.983222] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:46.793 [2024-11-10 05:29:39.983235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.793 [2024-11-10 05:29:39.983245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:46.793 [2024-11-10 
05:29:39.983262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.593 ms 00:27:46.793 [2024-11-10 05:29:39.983272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.793 [2024-11-10 05:29:39.983347] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:46.793 [2024-11-10 05:29:39.983363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.793 [2024-11-10 05:29:39.983373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:46.793 [2024-11-10 05:29:39.983384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:46.793 [2024-11-10 05:29:39.983393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.793 [2024-11-10 05:29:39.988673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.793 [2024-11-10 05:29:39.988728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:46.793 [2024-11-10 05:29:39.988741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.253 ms 00:27:46.793 [2024-11-10 05:29:39.988755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.793 [2024-11-10 05:29:39.989889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.793 [2024-11-10 05:29:39.989936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:46.793 [2024-11-10 05:29:39.989949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:46.793 [2024-11-10 05:29:39.989957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.793 [2024-11-10 05:29:39.990039] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:46.793 [2024-11-10 05:29:39.990307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.793 [2024-11-10 05:29:39.990329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:46.793 [2024-11-10 05:29:39.990340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.270 ms 00:27:46.793 [2024-11-10 05:29:39.990352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.737 [2024-11-10 05:29:40.712615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.737 [2024-11-10 05:29:40.712943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:47.737 [2024-11-10 05:29:40.712974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 721.813 ms 00:27:47.737 [2024-11-10 05:29:40.712984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.737 [2024-11-10 05:29:40.715573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.737 [2024-11-10 05:29:40.715631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:47.737 [2024-11-10 05:29:40.715645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.879 ms 00:27:47.737 [2024-11-10 05:29:40.715655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.737 [2024-11-10 05:29:40.717213] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:27:47.737 [2024-11-10 05:29:40.717390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.737 [2024-11-10 05:29:40.717449] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:47.737 [2024-11-10 05:29:40.717477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.685 ms 00:27:47.737 [2024-11-10 05:29:40.717500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.737 [2024-11-10 05:29:40.717619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.737 [2024-11-10 05:29:40.717649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:47.737 [2024-11-10 05:29:40.717673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:47.737 [2024-11-10 05:29:40.717702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.737 [2024-11-10 05:29:40.717836] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 727.780 ms, result 0 00:27:47.737 [2024-11-10 05:29:40.717889] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:47.737 [2024-11-10 05:29:40.718198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.737 [2024-11-10 05:29:40.718229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:47.737 [2024-11-10 05:29:40.718240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:27:47.737 [2024-11-10 05:29:40.718251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.309 [2024-11-10 05:29:41.453744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.309 [2024-11-10 05:29:41.453814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:48.309 [2024-11-10 05:29:41.453830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 734.812 ms 00:27:48.309 [2024-11-10 05:29:41.453839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.309 [2024-11-10 05:29:41.455941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.310 [2024-11-10 05:29:41.456177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:48.310 [2024-11-10 05:29:41.456197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.528 ms 00:27:48.310 [2024-11-10 05:29:41.456206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.310 [2024-11-10 05:29:41.456883] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:48.310 [2024-11-10 05:29:41.456924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.310 [2024-11-10 05:29:41.456934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:48.310 [2024-11-10 05:29:41.456945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.678 ms 00:27:48.310 [2024-11-10 05:29:41.456954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.310 [2024-11-10 05:29:41.457106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.310 [2024-11-10 05:29:41.457133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:48.310 [2024-11-10 05:29:41.457144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:48.310 [2024-11-10 05:29:41.457152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.310 [2024-11-10 
05:29:41.457199] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 739.302 ms, result 0 00:27:48.310 [2024-11-10 05:29:41.457251] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:48.310 [2024-11-10 05:29:41.457271] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:48.310 [2024-11-10 05:29:41.457282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.310 [2024-11-10 05:29:41.457292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:48.310 [2024-11-10 05:29:41.457302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1467.261 ms 00:27:48.310 [2024-11-10 05:29:41.457311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.310 [2024-11-10 05:29:41.457346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.310 [2024-11-10 05:29:41.457360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:48.310 [2024-11-10 05:29:41.457370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:48.310 [2024-11-10 05:29:41.457379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.310 [2024-11-10 05:29:41.467707] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:48.310 [2024-11-10 05:29:41.468028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.310 [2024-11-10 05:29:41.468047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:48.310 [2024-11-10 05:29:41.468059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.632 ms 00:27:48.310 [2024-11-10 05:29:41.468068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.310 [2024-11-10 05:29:41.468834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.310 [2024-11-10 05:29:41.468859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:48.310 [2024-11-10 05:29:41.468871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.660 ms 00:27:48.310 [2024-11-10 05:29:41.468878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.310 [2024-11-10 05:29:41.471120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.310 [2024-11-10 05:29:41.471148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:48.310 [2024-11-10 05:29:41.471160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.225 ms 00:27:48.310 [2024-11-10 05:29:41.471174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.310 [2024-11-10 05:29:41.471217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.310 [2024-11-10 05:29:41.471226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:48.310 [2024-11-10 05:29:41.471235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:48.310 [2024-11-10 05:29:41.471243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.310 [2024-11-10 05:29:41.471366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.310 [2024-11-10 05:29:41.471376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:48.310 
[2024-11-10 05:29:41.471385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:48.310 [2024-11-10 05:29:41.471398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.310 [2024-11-10 05:29:41.471425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.310 [2024-11-10 05:29:41.471434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:48.310 [2024-11-10 05:29:41.471443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:48.310 [2024-11-10 05:29:41.471451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.310 [2024-11-10 05:29:41.471489] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:48.310 [2024-11-10 05:29:41.471503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.310 [2024-11-10 05:29:41.471512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:48.310 [2024-11-10 05:29:41.471520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:48.310 [2024-11-10 05:29:41.471538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.310 [2024-11-10 05:29:41.471598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.310 [2024-11-10 05:29:41.471611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:48.310 [2024-11-10 05:29:41.471619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:27:48.310 [2024-11-10 05:29:41.471628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.310 [2024-11-10 05:29:41.473142] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1600.385 ms, result 0 00:27:48.310 [2024-11-10 05:29:41.488746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.310 [2024-11-10 05:29:41.504732] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:48.310 [2024-11-10 05:29:41.512934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:48.571 Validate MD5 checksum, iteration 1 00:27:48.571 05:29:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:48.571 05:29:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:48.571 05:29:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:48.571 05:29:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:48.571 05:29:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:48.571 05:29:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:48.571 05:29:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:48.571 05:29:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:48.571 05:29:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:48.571 05:29:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:48.571 05:29:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:48.571 05:29:41 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:48.571 05:29:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:48.571 05:29:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:48.571 05:29:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:48.571 [2024-11-10 05:29:41.712683] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:48.571 [2024-11-10 05:29:41.713023] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92626 ] 00:27:48.831 [2024-11-10 05:29:41.864930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.831 [2024-11-10 05:29:41.915822] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.219  [2024-11-10T05:29:44.393Z] Copying: 500/1024 [MB] (500 MBps) [2024-11-10T05:29:44.960Z] Copying: 1024/1024 [MB] (average 542 MBps) 00:27:51.724 00:27:51.724 05:29:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:51.724 05:29:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:54.254 05:29:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:54.254 05:29:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3846eae18d6f1e10bc1830efa31eef5b 00:27:54.254 05:29:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3846eae18d6f1e10bc1830efa31eef5b != \3\8\4\6\e\a\e\1\8\d\6\f\1\e\1\0\b\c\1\8\3\0\e\f\a\3\1\e\e\f\5\b ]] 00:27:54.254 05:29:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:54.254 05:29:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:54.254 05:29:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:54.254 Validate MD5 checksum, iteration 2 00:27:54.254 05:29:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:54.254 05:29:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:54.254 05:29:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:54.254 05:29:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:54.254 05:29:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:54.255 05:29:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:54.255 [2024-11-10 05:29:47.051865] Starting SPDK v24.09.1-pre git 
sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:54.255 [2024-11-10 05:29:47.052016] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92693 ] 00:27:54.255 [2024-11-10 05:29:47.197692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.255 [2024-11-10 05:29:47.231240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.635  [2024-11-10T05:29:49.457Z] Copying: 566/1024 [MB] (566 MBps) [2024-11-10T05:29:50.044Z] Copying: 1024/1024 [MB] (average 628 MBps) 00:27:56.808 00:27:56.808 05:29:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:56.808 05:29:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:58.710 05:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:58.710 05:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a0f5e7b9c2e418aa85757fa04160fd0f 00:27:58.710 05:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a0f5e7b9c2e418aa85757fa04160fd0f != \a\0\f\5\e\7\b\9\c\2\e\4\1\8\a\a\8\5\7\5\7\f\a\0\4\1\6\0\f\d\0\f ]] 00:27:58.710 05:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:58.710 05:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:58.710 05:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:58.710 05:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:58.710 05:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:58.710 05:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:58.971 05:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:58.971 05:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:58.971 05:29:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:58.971 05:29:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:58.971 05:29:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92596 ]] 00:27:58.971 05:29:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92596 00:27:58.971 05:29:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92596 ']' 00:27:58.971 05:29:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92596 00:27:58.971 05:29:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:58.971 05:29:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:58.971 05:29:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92596 00:27:58.971 killing process with pid 92596 00:27:58.971 05:29:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:58.971 05:29:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:58.971 05:29:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92596' 00:27:58.971 05:29:52 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@969 -- # kill 92596 00:27:58.971 05:29:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92596 00:27:58.971 [2024-11-10 05:29:52.125071] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:58.971 [2024-11-10 05:29:52.128357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.971 [2024-11-10 05:29:52.128392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:58.971 [2024-11-10 05:29:52.128404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:58.971 [2024-11-10 05:29:52.128410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.971 [2024-11-10 05:29:52.128428] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:58.971 [2024-11-10 05:29:52.128948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.971 [2024-11-10 05:29:52.128966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:58.971 [2024-11-10 05:29:52.128974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.508 ms 00:27:58.971 [2024-11-10 05:29:52.128980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.971 [2024-11-10 05:29:52.129299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.971 [2024-11-10 05:29:52.129331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:58.971 [2024-11-10 05:29:52.129349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:27:58.971 [2024-11-10 05:29:52.129404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.971 [2024-11-10 05:29:52.130902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.971 [2024-11-10 05:29:52.130996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:58.971 [2024-11-10 05:29:52.131040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.470 ms 00:27:58.971 [2024-11-10 05:29:52.131059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.971 [2024-11-10 05:29:52.131963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.971 [2024-11-10 05:29:52.132043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:58.971 [2024-11-10 05:29:52.132106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.861 ms 00:27:58.971 [2024-11-10 05:29:52.132125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.971 [2024-11-10 05:29:52.133834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.971 [2024-11-10 05:29:52.133928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:58.971 [2024-11-10 05:29:52.133976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.662 ms 00:27:58.972 [2024-11-10 05:29:52.134003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.135459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.972 [2024-11-10 05:29:52.135549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:58.972 [2024-11-10 05:29:52.135606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.419 ms 00:27:58.972 [2024-11-10 05:29:52.135624] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.135693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.972 [2024-11-10 05:29:52.135713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:58.972 [2024-11-10 05:29:52.135728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:58.972 [2024-11-10 05:29:52.135743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.137252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.972 [2024-11-10 05:29:52.137323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:58.972 [2024-11-10 05:29:52.137410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.489 ms 00:27:58.972 [2024-11-10 05:29:52.137455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.138575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.972 [2024-11-10 05:29:52.138659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:58.972 [2024-11-10 05:29:52.138709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.084 ms 00:27:58.972 [2024-11-10 05:29:52.138728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.140303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.972 [2024-11-10 05:29:52.140389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:58.972 [2024-11-10 05:29:52.140429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.543 ms 00:27:58.972 [2024-11-10 05:29:52.140466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.142191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.972 [2024-11-10 05:29:52.142274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:58.972 [2024-11-10 05:29:52.142314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.592 ms 00:27:58.972 [2024-11-10 05:29:52.142349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.142436] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:58.972 [2024-11-10 05:29:52.142479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:58.972 [2024-11-10 05:29:52.142554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:58.972 [2024-11-10 05:29:52.142600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:58.972 [2024-11-10 05:29:52.142626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:58.972 [2024-11-10 05:29:52.142648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:58.972 [2024-11-10 05:29:52.142671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:58.972 [2024-11-10 05:29:52.142746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:58.972 [2024-11-10 05:29:52.142769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:58.972 
[2024-11-10 05:29:52.142792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:58.972 [2024-11-10 05:29:52.142814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:58.972 [2024-11-10 05:29:52.142866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:58.972 [2024-11-10 05:29:52.142888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:58.972 [2024-11-10 05:29:52.142909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:58.972 [2024-11-10 05:29:52.142951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:58.972 [2024-11-10 05:29:52.143001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:58.972 [2024-11-10 05:29:52.143040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:58.972 [2024-11-10 05:29:52.143065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:58.972 [2024-11-10 05:29:52.143088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:58.972 [2024-11-10 05:29:52.143112] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:58.972 [2024-11-10 05:29:52.143127] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 786d8882-3dd7-4f51-b235-b2b44e9b80fd 00:27:58.972 [2024-11-10 05:29:52.143151] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:58.972 [2024-11-10 05:29:52.143165] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:58.972 [2024-11-10 05:29:52.143218] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:58.972 [2024-11-10 05:29:52.143237] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:58.972 [2024-11-10 05:29:52.143252] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:58.972 [2024-11-10 05:29:52.143267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:58.972 [2024-11-10 05:29:52.143282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:58.972 [2024-11-10 05:29:52.143322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:58.972 [2024-11-10 05:29:52.143339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:58.972 [2024-11-10 05:29:52.143354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.972 [2024-11-10 05:29:52.143370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:58.972 [2024-11-10 05:29:52.143386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.919 ms 00:27:58.972 [2024-11-10 05:29:52.143405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.145152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.972 [2024-11-10 05:29:52.145235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:58.972 [2024-11-10 05:29:52.145247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.703 ms 00:27:58.972 [2024-11-10 05:29:52.145253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:27:58.972 [2024-11-10 05:29:52.145340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.972 [2024-11-10 05:29:52.145348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:58.972 [2024-11-10 05:29:52.145359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:27:58.972 [2024-11-10 05:29:52.145364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.151412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:58.972 [2024-11-10 05:29:52.151441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:58.972 [2024-11-10 05:29:52.151450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:58.972 [2024-11-10 05:29:52.151455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.151482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:58.972 [2024-11-10 05:29:52.151489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:58.972 [2024-11-10 05:29:52.151499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:58.972 [2024-11-10 05:29:52.151505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.151549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:58.972 [2024-11-10 05:29:52.151557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:58.972 [2024-11-10 05:29:52.151563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:58.972 [2024-11-10 05:29:52.151569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.151584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:58.972 [2024-11-10 05:29:52.151591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:58.972 [2024-11-10 05:29:52.151598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:58.972 [2024-11-10 05:29:52.151606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.162365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:58.972 [2024-11-10 05:29:52.162405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:58.972 [2024-11-10 05:29:52.162413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:58.972 [2024-11-10 05:29:52.162420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.170936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:58.972 [2024-11-10 05:29:52.170974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:58.972 [2024-11-10 05:29:52.170985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:58.972 [2024-11-10 05:29:52.171005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.171065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:58.972 [2024-11-10 05:29:52.171073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:58.972 [2024-11-10 05:29:52.171080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:58.972 [2024-11-10 05:29:52.171085] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.171113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:58.972 [2024-11-10 05:29:52.171120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:58.972 [2024-11-10 05:29:52.171130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:58.972 [2024-11-10 05:29:52.171136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.171208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:58.972 [2024-11-10 05:29:52.171219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:58.972 [2024-11-10 05:29:52.171225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:58.972 [2024-11-10 05:29:52.171232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.972 [2024-11-10 05:29:52.171262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:58.972 [2024-11-10 05:29:52.171270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:58.972 [2024-11-10 05:29:52.171276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:58.972 [2024-11-10 05:29:52.171282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.973 [2024-11-10 05:29:52.171320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:58.973 [2024-11-10 05:29:52.171328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:58.973 [2024-11-10 05:29:52.171334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:58.973 [2024-11-10 05:29:52.171341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.973 [2024-11-10 05:29:52.171381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:58.973 [2024-11-10 05:29:52.171390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:58.973 [2024-11-10 05:29:52.171397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:58.973 [2024-11-10 05:29:52.171403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.973 [2024-11-10 05:29:52.171517] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 43.132 ms, result 0 00:27:59.539 05:29:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:59.539 05:29:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:59.539 05:29:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:59.539 05:29:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:59.539 05:29:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:59.539 05:29:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:59.539 Remove shared memory files 00:27:59.539 05:29:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:59.539 05:29:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:59.539 05:29:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:59.539 05:29:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:59.539 05:29:52 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92405 00:27:59.539 05:29:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:59.539 05:29:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:59.539 ************************************ 00:27:59.539 END TEST ftl_upgrade_shutdown 00:27:59.539 ************************************ 00:27:59.539 00:27:59.539 real 1m12.581s 00:27:59.539 user 1m36.450s 00:27:59.539 sys 0m22.006s 00:27:59.539 05:29:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:59.539 05:29:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:59.539 05:29:52 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:27:59.539 05:29:52 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:59.539 05:29:52 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:27:59.539 05:29:52 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:59.539 05:29:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:59.539 ************************************ 00:27:59.539 START TEST ftl_restore_fast 00:27:59.539 ************************************ 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:59.539 * Looking for test storage... 00:27:59.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:59.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.539 --rc genhtml_branch_coverage=1 00:27:59.539 --rc genhtml_function_coverage=1 00:27:59.539 --rc genhtml_legend=1 00:27:59.539 --rc geninfo_all_blocks=1 00:27:59.539 --rc geninfo_unexecuted_blocks=1 00:27:59.539 00:27:59.539 ' 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:59.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.539 --rc genhtml_branch_coverage=1 00:27:59.539 --rc genhtml_function_coverage=1 00:27:59.539 --rc genhtml_legend=1 00:27:59.539 --rc geninfo_all_blocks=1 00:27:59.539 --rc geninfo_unexecuted_blocks=1 00:27:59.539 00:27:59.539 ' 00:27:59.539 05:29:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:59.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.540 --rc genhtml_branch_coverage=1 00:27:59.540 --rc genhtml_function_coverage=1 00:27:59.540 --rc genhtml_legend=1 00:27:59.540 --rc geninfo_all_blocks=1 00:27:59.540 --rc geninfo_unexecuted_blocks=1 00:27:59.540 00:27:59.540 ' 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:59.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.540 --rc genhtml_branch_coverage=1 00:27:59.540 --rc genhtml_function_coverage=1 00:27:59.540 --rc genhtml_legend=1 00:27:59.540 --rc geninfo_all_blocks=1 00:27:59.540 --rc geninfo_unexecuted_blocks=1 00:27:59.540 00:27:59.540 ' 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.G2wZ99U2v8 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:27:59.540 05:29:52 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=92829 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 92829 00:27:59.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 92829 ']' 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:27:59.540 05:29:52 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:59.800 [2024-11-10 05:29:52.805942] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:59.800 [2024-11-10 05:29:52.806060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92829 ] 00:27:59.800 [2024-11-10 05:29:52.947143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.800 [2024-11-10 05:29:52.989247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.734 05:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:00.734 05:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:28:00.734 05:29:53 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:00.734 05:29:53 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:00.734 05:29:53 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:00.734 05:29:53 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:00.734 05:29:53 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:00.734 05:29:53 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:00.734 05:29:53 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:00.734 05:29:53 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:00.734 05:29:53 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:00.734 05:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:00.734 05:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:00.734 05:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:00.734 05:29:53 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:28:00.734 05:29:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:00.992 05:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:00.992 { 00:28:00.992 "name": "nvme0n1", 00:28:00.992 "aliases": [ 00:28:00.992 "10536791-2807-40b6-a75f-7fab1f081875" 00:28:00.992 ], 00:28:00.992 "product_name": "NVMe disk", 00:28:00.992 "block_size": 4096, 00:28:00.992 "num_blocks": 1310720, 00:28:00.992 "uuid": "10536791-2807-40b6-a75f-7fab1f081875", 00:28:00.992 "numa_id": -1, 00:28:00.992 "assigned_rate_limits": { 00:28:00.992 "rw_ios_per_sec": 0, 00:28:00.992 "rw_mbytes_per_sec": 0, 00:28:00.992 "r_mbytes_per_sec": 0, 00:28:00.992 "w_mbytes_per_sec": 0 00:28:00.992 }, 00:28:00.992 "claimed": true, 00:28:00.992 "claim_type": "read_many_write_one", 00:28:00.992 "zoned": false, 00:28:00.992 "supported_io_types": { 00:28:00.992 "read": true, 00:28:00.992 "write": true, 00:28:00.992 "unmap": true, 00:28:00.992 "flush": true, 00:28:00.992 "reset": true, 00:28:00.992 "nvme_admin": true, 00:28:00.992 "nvme_io": true, 00:28:00.992 "nvme_io_md": false, 00:28:00.992 "write_zeroes": true, 00:28:00.992 "zcopy": false, 00:28:00.992 "get_zone_info": false, 00:28:00.992 "zone_management": false, 00:28:00.992 "zone_append": false, 00:28:00.992 "compare": true, 00:28:00.992 "compare_and_write": false, 00:28:00.992 "abort": true, 00:28:00.992 "seek_hole": false, 00:28:00.992 "seek_data": false, 00:28:00.992 "copy": true, 00:28:00.992 "nvme_iov_md": false 00:28:00.992 }, 00:28:00.992 "driver_specific": { 00:28:00.992 "nvme": [ 00:28:00.992 { 00:28:00.992 "pci_address": "0000:00:11.0", 00:28:00.992 "trid": { 00:28:00.992 "trtype": "PCIe", 00:28:00.992 "traddr": "0000:00:11.0" 00:28:00.992 }, 00:28:00.992 "ctrlr_data": { 00:28:00.992 "cntlid": 0, 00:28:00.992 "vendor_id": "0x1b36", 00:28:00.992 "model_number": "QEMU NVMe Ctrl", 00:28:00.992 "serial_number": "12341", 00:28:00.992 "firmware_revision": "8.0.0", 00:28:00.992 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:00.992 "oacs": { 00:28:00.992 "security": 0, 00:28:00.992 "format": 1, 00:28:00.992 "firmware": 0, 00:28:00.992 "ns_manage": 1 00:28:00.992 }, 00:28:00.992 "multi_ctrlr": false, 00:28:00.992 "ana_reporting": false 00:28:00.992 }, 00:28:00.992 "vs": { 00:28:00.992 "nvme_version": "1.4" 00:28:00.992 }, 00:28:00.992 "ns_data": { 00:28:00.992 "id": 1, 00:28:00.992 "can_share": false 00:28:00.992 } 00:28:00.992 } 00:28:00.992 ], 00:28:00.992 "mp_policy": "active_passive" 00:28:00.992 } 00:28:00.992 } 00:28:00.992 ]' 00:28:00.992 05:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:00.992 05:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:00.992 05:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:00.992 05:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:00.992 05:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:00.992 05:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:28:00.992 05:29:54 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:00.992 05:29:54 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:00.992 05:29:54 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:00.992 05:29:54 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:00.992 05:29:54 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:01.250 05:29:54 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=39d1bbaa-b691-47ae-80b9-e769a2966711 00:28:01.250 05:29:54 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:01.250 05:29:54 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 39d1bbaa-b691-47ae-80b9-e769a2966711 00:28:01.507 05:29:54 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:01.765 05:29:54 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=0b627628-32c8-4dc7-9726-c2fd0d2c3421 00:28:01.765 05:29:54 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0b627628-32c8-4dc7-9726-c2fd0d2c3421 00:28:01.765 05:29:54 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=392c8fd7-8334-4b52-8f41-57b5142b3ee4 00:28:01.765 05:29:54 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:01.765 05:29:54 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 392c8fd7-8334-4b52-8f41-57b5142b3ee4 00:28:01.765 05:29:54 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:01.765 05:29:54 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:01.765 05:29:54 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=392c8fd7-8334-4b52-8f41-57b5142b3ee4 00:28:01.765 05:29:54 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:01.765 05:29:54 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 392c8fd7-8334-4b52-8f41-57b5142b3ee4 00:28:01.765 05:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=392c8fd7-8334-4b52-8f41-57b5142b3ee4 00:28:01.765 05:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:01.765 05:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:01.765 05:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:01.765 05:29:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 392c8fd7-8334-4b52-8f41-57b5142b3ee4 00:28:02.024 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:02.024 { 00:28:02.024 "name": "392c8fd7-8334-4b52-8f41-57b5142b3ee4", 00:28:02.024 "aliases": [ 00:28:02.024 "lvs/nvme0n1p0" 00:28:02.024 ], 00:28:02.024 "product_name": "Logical Volume", 00:28:02.024 "block_size": 4096, 00:28:02.024 "num_blocks": 26476544, 00:28:02.024 "uuid": "392c8fd7-8334-4b52-8f41-57b5142b3ee4", 00:28:02.024 "assigned_rate_limits": { 00:28:02.024 "rw_ios_per_sec": 0, 00:28:02.024 "rw_mbytes_per_sec": 0, 00:28:02.024 "r_mbytes_per_sec": 0, 00:28:02.024 "w_mbytes_per_sec": 0 00:28:02.024 }, 00:28:02.024 "claimed": false, 00:28:02.024 "zoned": false, 00:28:02.024 "supported_io_types": { 00:28:02.024 "read": true, 00:28:02.024 "write": true, 00:28:02.024 "unmap": true, 00:28:02.024 "flush": false, 00:28:02.024 "reset": true, 00:28:02.024 "nvme_admin": false, 00:28:02.024 "nvme_io": false, 00:28:02.024 "nvme_io_md": false, 00:28:02.024 "write_zeroes": true, 00:28:02.024 "zcopy": false, 00:28:02.024 "get_zone_info": false, 00:28:02.024 "zone_management": false, 00:28:02.024 
"zone_append": false, 00:28:02.024 "compare": false, 00:28:02.024 "compare_and_write": false, 00:28:02.024 "abort": false, 00:28:02.024 "seek_hole": true, 00:28:02.024 "seek_data": true, 00:28:02.024 "copy": false, 00:28:02.024 "nvme_iov_md": false 00:28:02.024 }, 00:28:02.024 "driver_specific": { 00:28:02.024 "lvol": { 00:28:02.024 "lvol_store_uuid": "0b627628-32c8-4dc7-9726-c2fd0d2c3421", 00:28:02.024 "base_bdev": "nvme0n1", 00:28:02.024 "thin_provision": true, 00:28:02.024 "num_allocated_clusters": 0, 00:28:02.024 "snapshot": false, 00:28:02.024 "clone": false, 00:28:02.024 "esnap_clone": false 00:28:02.024 } 00:28:02.024 } 00:28:02.024 } 00:28:02.024 ]' 00:28:02.024 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:02.024 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:02.024 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:02.024 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:02.024 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:02.024 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:02.024 05:29:55 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:02.024 05:29:55 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:02.024 05:29:55 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:02.281 05:29:55 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:02.281 05:29:55 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:02.281 05:29:55 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 392c8fd7-8334-4b52-8f41-57b5142b3ee4 00:28:02.281 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=392c8fd7-8334-4b52-8f41-57b5142b3ee4 00:28:02.281 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:02.281 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:02.281 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:02.281 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 392c8fd7-8334-4b52-8f41-57b5142b3ee4 00:28:02.538 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:02.538 { 00:28:02.538 "name": "392c8fd7-8334-4b52-8f41-57b5142b3ee4", 00:28:02.538 "aliases": [ 00:28:02.538 "lvs/nvme0n1p0" 00:28:02.538 ], 00:28:02.538 "product_name": "Logical Volume", 00:28:02.538 "block_size": 4096, 00:28:02.538 "num_blocks": 26476544, 00:28:02.538 "uuid": "392c8fd7-8334-4b52-8f41-57b5142b3ee4", 00:28:02.538 "assigned_rate_limits": { 00:28:02.538 "rw_ios_per_sec": 0, 00:28:02.538 "rw_mbytes_per_sec": 0, 00:28:02.538 "r_mbytes_per_sec": 0, 00:28:02.538 "w_mbytes_per_sec": 0 00:28:02.538 }, 00:28:02.538 "claimed": false, 00:28:02.538 "zoned": false, 00:28:02.538 "supported_io_types": { 00:28:02.538 "read": true, 00:28:02.538 "write": true, 00:28:02.538 "unmap": true, 00:28:02.538 "flush": false, 00:28:02.538 "reset": true, 00:28:02.538 "nvme_admin": false, 00:28:02.538 "nvme_io": false, 00:28:02.538 "nvme_io_md": false, 00:28:02.538 "write_zeroes": true, 00:28:02.538 "zcopy": false, 00:28:02.538 "get_zone_info": false, 00:28:02.538 
"zone_management": false, 00:28:02.538 "zone_append": false, 00:28:02.538 "compare": false, 00:28:02.538 "compare_and_write": false, 00:28:02.538 "abort": false, 00:28:02.538 "seek_hole": true, 00:28:02.538 "seek_data": true, 00:28:02.538 "copy": false, 00:28:02.538 "nvme_iov_md": false 00:28:02.538 }, 00:28:02.538 "driver_specific": { 00:28:02.538 "lvol": { 00:28:02.538 "lvol_store_uuid": "0b627628-32c8-4dc7-9726-c2fd0d2c3421", 00:28:02.538 "base_bdev": "nvme0n1", 00:28:02.538 "thin_provision": true, 00:28:02.538 "num_allocated_clusters": 0, 00:28:02.538 "snapshot": false, 00:28:02.538 "clone": false, 00:28:02.538 "esnap_clone": false 00:28:02.538 } 00:28:02.538 } 00:28:02.538 } 00:28:02.538 ]' 00:28:02.538 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:02.538 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:02.538 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:02.538 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:02.538 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:02.538 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:02.538 05:29:55 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:02.538 05:29:55 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:02.796 05:29:55 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:02.796 05:29:55 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 392c8fd7-8334-4b52-8f41-57b5142b3ee4 00:28:02.796 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=392c8fd7-8334-4b52-8f41-57b5142b3ee4 00:28:02.796 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:02.796 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:02.796 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:02.796 05:29:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 392c8fd7-8334-4b52-8f41-57b5142b3ee4 00:28:03.054 05:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:03.054 { 00:28:03.054 "name": "392c8fd7-8334-4b52-8f41-57b5142b3ee4", 00:28:03.054 "aliases": [ 00:28:03.054 "lvs/nvme0n1p0" 00:28:03.054 ], 00:28:03.054 "product_name": "Logical Volume", 00:28:03.054 "block_size": 4096, 00:28:03.054 "num_blocks": 26476544, 00:28:03.054 "uuid": "392c8fd7-8334-4b52-8f41-57b5142b3ee4", 00:28:03.054 "assigned_rate_limits": { 00:28:03.054 "rw_ios_per_sec": 0, 00:28:03.054 "rw_mbytes_per_sec": 0, 00:28:03.054 "r_mbytes_per_sec": 0, 00:28:03.054 "w_mbytes_per_sec": 0 00:28:03.054 }, 00:28:03.054 "claimed": false, 00:28:03.054 "zoned": false, 00:28:03.054 "supported_io_types": { 00:28:03.054 "read": true, 00:28:03.054 "write": true, 00:28:03.054 "unmap": true, 00:28:03.054 "flush": false, 00:28:03.054 "reset": true, 00:28:03.054 "nvme_admin": false, 00:28:03.054 "nvme_io": false, 00:28:03.054 "nvme_io_md": false, 00:28:03.054 "write_zeroes": true, 00:28:03.054 "zcopy": false, 00:28:03.054 "get_zone_info": false, 00:28:03.054 "zone_management": false, 00:28:03.054 "zone_append": false, 00:28:03.054 "compare": false, 00:28:03.054 "compare_and_write": false, 00:28:03.054 "abort": false, 
00:28:03.054 "seek_hole": true, 00:28:03.054 "seek_data": true, 00:28:03.054 "copy": false, 00:28:03.054 "nvme_iov_md": false 00:28:03.054 }, 00:28:03.054 "driver_specific": { 00:28:03.054 "lvol": { 00:28:03.054 "lvol_store_uuid": "0b627628-32c8-4dc7-9726-c2fd0d2c3421", 00:28:03.054 "base_bdev": "nvme0n1", 00:28:03.054 "thin_provision": true, 00:28:03.054 "num_allocated_clusters": 0, 00:28:03.054 "snapshot": false, 00:28:03.054 "clone": false, 00:28:03.054 "esnap_clone": false 00:28:03.054 } 00:28:03.054 } 00:28:03.054 } 00:28:03.054 ]' 00:28:03.054 05:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:03.054 05:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:03.054 05:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:03.054 05:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:03.054 05:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:03.054 05:29:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:03.054 05:29:56 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:03.054 05:29:56 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 392c8fd7-8334-4b52-8f41-57b5142b3ee4 --l2p_dram_limit 10' 00:28:03.054 05:29:56 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:03.054 05:29:56 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:03.054 05:29:56 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:03.054 05:29:56 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:03.054 05:29:56 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:03.054 05:29:56 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 392c8fd7-8334-4b52-8f41-57b5142b3ee4 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:03.054 [2024-11-10 05:29:56.280660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.054 [2024-11-10 05:29:56.280705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:03.054 [2024-11-10 05:29:56.280718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:03.054 [2024-11-10 05:29:56.280726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.054 [2024-11-10 05:29:56.280764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.054 [2024-11-10 05:29:56.280773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:03.054 [2024-11-10 05:29:56.280780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:03.054 [2024-11-10 05:29:56.280790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.054 [2024-11-10 05:29:56.280809] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:03.054 [2024-11-10 05:29:56.281013] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:03.054 [2024-11-10 05:29:56.281026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.054 [2024-11-10 05:29:56.281034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:03.054 [2024-11-10 05:29:56.281046] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:28:03.054 [2024-11-10 05:29:56.281054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.054 [2024-11-10 05:29:56.281077] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5d7d5206-51f2-499a-8e45-dc97d339ed69 00:28:03.054 [2024-11-10 05:29:56.282363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.054 [2024-11-10 05:29:56.282385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:03.054 [2024-11-10 05:29:56.282396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:03.054 [2024-11-10 05:29:56.282403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.315 [2024-11-10 05:29:56.289686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.315 [2024-11-10 05:29:56.289710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:03.315 [2024-11-10 05:29:56.289723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.217 ms 00:28:03.315 [2024-11-10 05:29:56.289729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.315 [2024-11-10 05:29:56.289828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.315 [2024-11-10 05:29:56.289836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:03.315 [2024-11-10 05:29:56.289845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:03.315 [2024-11-10 05:29:56.289853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.315 [2024-11-10 05:29:56.289889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.315 [2024-11-10 05:29:56.289899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:03.315 [2024-11-10 05:29:56.289911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:03.315 [2024-11-10 05:29:56.289917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.315 [2024-11-10 05:29:56.289938] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:03.315 [2024-11-10 05:29:56.291583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.315 [2024-11-10 05:29:56.291608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:03.315 [2024-11-10 05:29:56.291617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.654 ms 00:28:03.315 [2024-11-10 05:29:56.291628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.315 [2024-11-10 05:29:56.291656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.315 [2024-11-10 05:29:56.291665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:03.315 [2024-11-10 05:29:56.291673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:03.315 [2024-11-10 05:29:56.291684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.315 [2024-11-10 05:29:56.291697] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:03.315 [2024-11-10 05:29:56.291815] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:03.315 [2024-11-10 05:29:56.291830] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:03.315 [2024-11-10 05:29:56.291842] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:03.315 [2024-11-10 05:29:56.291853] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:03.315 [2024-11-10 05:29:56.291863] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:03.315 [2024-11-10 05:29:56.291870] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:03.315 [2024-11-10 05:29:56.291880] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:03.315 [2024-11-10 05:29:56.291886] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:03.315 [2024-11-10 05:29:56.291896] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:03.315 [2024-11-10 05:29:56.291904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.315 [2024-11-10 05:29:56.291911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:03.315 [2024-11-10 05:29:56.291918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:28:03.315 [2024-11-10 05:29:56.291926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.315 [2024-11-10 05:29:56.292011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.315 [2024-11-10 05:29:56.292022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:03.315 [2024-11-10 05:29:56.292028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:03.315 [2024-11-10 05:29:56.292036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.315 [2024-11-10 05:29:56.292111] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:03.315 [2024-11-10 05:29:56.292122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:03.315 [2024-11-10 05:29:56.292128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:03.315 [2024-11-10 05:29:56.292136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.315 [2024-11-10 05:29:56.292142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:03.315 [2024-11-10 05:29:56.292149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:03.315 [2024-11-10 05:29:56.292155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:03.315 [2024-11-10 05:29:56.292163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:03.315 [2024-11-10 05:29:56.292169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:03.315 [2024-11-10 05:29:56.292175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:03.315 [2024-11-10 05:29:56.292181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:03.315 [2024-11-10 05:29:56.292189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:03.315 [2024-11-10 05:29:56.292194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:03.315 [2024-11-10 05:29:56.292203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:03.315 [2024-11-10 05:29:56.292208] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:03.315 [2024-11-10 05:29:56.292217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.315 [2024-11-10 05:29:56.292222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:03.315 [2024-11-10 05:29:56.292229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:03.315 [2024-11-10 05:29:56.292235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.315 [2024-11-10 05:29:56.292244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:03.315 [2024-11-10 05:29:56.292250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:03.315 [2024-11-10 05:29:56.292258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.315 [2024-11-10 05:29:56.292263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:03.315 [2024-11-10 05:29:56.292271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:03.315 [2024-11-10 05:29:56.292277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.315 [2024-11-10 05:29:56.292284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:03.315 [2024-11-10 05:29:56.292291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:03.315 [2024-11-10 05:29:56.292299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.315 [2024-11-10 05:29:56.292305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:03.315 [2024-11-10 05:29:56.292313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:03.315 [2024-11-10 05:29:56.292319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.315 [2024-11-10 05:29:56.292327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:03.315 [2024-11-10 05:29:56.292333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:03.315 [2024-11-10 05:29:56.292341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:03.315 [2024-11-10 05:29:56.292347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:03.315 [2024-11-10 05:29:56.292354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:03.315 [2024-11-10 05:29:56.292360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:03.315 [2024-11-10 05:29:56.292368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:03.315 [2024-11-10 05:29:56.292374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:03.315 [2024-11-10 05:29:56.292381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.315 [2024-11-10 05:29:56.292387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:03.315 [2024-11-10 05:29:56.292395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:03.315 [2024-11-10 05:29:56.292401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.315 [2024-11-10 05:29:56.292408] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:03.315 [2024-11-10 05:29:56.292414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:03.315 [2024-11-10 05:29:56.292427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:28:03.315 [2024-11-10 05:29:56.292436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.315 [2024-11-10 05:29:56.292448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:03.315 [2024-11-10 05:29:56.292455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:03.315 [2024-11-10 05:29:56.292462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:03.315 [2024-11-10 05:29:56.292468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:03.315 [2024-11-10 05:29:56.292476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:03.315 [2024-11-10 05:29:56.292482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:03.315 [2024-11-10 05:29:56.292494] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:03.315 [2024-11-10 05:29:56.292504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:03.315 [2024-11-10 05:29:56.292514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:03.315 [2024-11-10 05:29:56.292520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:03.315 [2024-11-10 05:29:56.292528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:03.316 [2024-11-10 05:29:56.292535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:03.316 [2024-11-10 05:29:56.292542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:03.316 [2024-11-10 05:29:56.292549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:03.316 [2024-11-10 05:29:56.292559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:03.316 [2024-11-10 05:29:56.292565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:03.316 [2024-11-10 05:29:56.292572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:03.316 [2024-11-10 05:29:56.292579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:03.316 [2024-11-10 05:29:56.292588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:03.316 [2024-11-10 05:29:56.292594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:03.316 [2024-11-10 05:29:56.292602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:03.316 [2024-11-10 05:29:56.292609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
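A quick cross-check of the layout numbers: the hex blk_sz counts in this superblock dump agree with the MiB figures in the region dump above if each FTL block is 4 KiB — an assumption, but one that every figure here is consistent with:

  # assumes a 4 KiB FTL block size; all results match the dump above
  echo $(( 0x5000 * 4096 / 1048576 ))   # l2p region: 20480 blocks -> 80 MiB
  echo $(( 0x800  * 4096 / 1048576 ))   # each p2l region: 2048 blocks -> 8 MiB
  echo $(( 20971520 * 4 / 1048576 ))    # 20971520 L2P entries x 4-byte addresses -> 80 MiB, i.e. exactly the l2p region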
00:28:03.316 [2024-11-10 05:29:56.292616] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:03.316 [2024-11-10 05:29:56.292626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:03.316 [2024-11-10 05:29:56.292634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:03.316 [2024-11-10 05:29:56.292640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:03.316 [2024-11-10 05:29:56.292646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:03.316 [2024-11-10 05:29:56.292652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:03.316 [2024-11-10 05:29:56.292660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.316 [2024-11-10 05:29:56.292667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:03.316 [2024-11-10 05:29:56.292678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:28:03.316 [2024-11-10 05:29:56.292683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.316 [2024-11-10 05:29:56.292722] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:03.316 [2024-11-10 05:29:56.292731] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:07.517 [2024-11-10 05:30:00.134834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.134967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:07.517 [2024-11-10 05:30:00.135052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3842.076 ms 00:28:07.517 [2024-11-10 05:30:00.135080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.154942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.155018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:07.517 [2024-11-10 05:30:00.155039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.644 ms 00:28:07.517 [2024-11-10 05:30:00.155049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.155221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.155234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:07.517 [2024-11-10 05:30:00.155252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:28:07.517 [2024-11-10 05:30:00.155261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.171308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.171356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:07.517 [2024-11-10 05:30:00.171372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.004 ms 00:28:07.517 [2024-11-10 05:30:00.171381] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.171425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.171439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:07.517 [2024-11-10 05:30:00.171452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:07.517 [2024-11-10 05:30:00.171466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.172244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.172281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:07.517 [2024-11-10 05:30:00.172301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:28:07.517 [2024-11-10 05:30:00.172311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.172449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.172469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:07.517 [2024-11-10 05:30:00.172490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:28:07.517 [2024-11-10 05:30:00.172510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.199397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.199451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:07.517 [2024-11-10 05:30:00.199469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.855 ms 00:28:07.517 [2024-11-10 05:30:00.199479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.210925] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:07.517 [2024-11-10 05:30:00.216005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.216054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:07.517 [2024-11-10 05:30:00.216069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.363 ms 00:28:07.517 [2024-11-10 05:30:00.216082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.305737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.305795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:07.517 [2024-11-10 05:30:00.305808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.618 ms 00:28:07.517 [2024-11-10 05:30:00.305824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.306056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.306073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:07.517 [2024-11-10 05:30:00.306083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:28:07.517 [2024-11-10 05:30:00.306096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.312160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.312343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:28:07.517 [2024-11-10 05:30:00.312355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.024 ms 00:28:07.517 [2024-11-10 05:30:00.312367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.317668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.317719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:07.517 [2024-11-10 05:30:00.317730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.268 ms 00:28:07.517 [2024-11-10 05:30:00.317741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.318106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.318123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:07.517 [2024-11-10 05:30:00.318133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:28:07.517 [2024-11-10 05:30:00.318146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.364874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.364930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:07.517 [2024-11-10 05:30:00.364943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.689 ms 00:28:07.517 [2024-11-10 05:30:00.364955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.372983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.373065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:07.517 [2024-11-10 05:30:00.373077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.929 ms 00:28:07.517 [2024-11-10 05:30:00.373089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.379109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.379158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:07.517 [2024-11-10 05:30:00.379169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.968 ms 00:28:07.517 [2024-11-10 05:30:00.379182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.385750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.385802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:07.517 [2024-11-10 05:30:00.385812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.519 ms 00:28:07.517 [2024-11-10 05:30:00.385828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.385884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.385897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:07.517 [2024-11-10 05:30:00.385908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:07.517 [2024-11-10 05:30:00.385920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.386059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.517 [2024-11-10 05:30:00.386076] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:07.517 [2024-11-10 05:30:00.386087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:28:07.517 [2024-11-10 05:30:00.386109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.517 [2024-11-10 05:30:00.387481] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4106.208 ms, result 0 00:28:07.517 { 00:28:07.517 "name": "ftl0", 00:28:07.517 "uuid": "5d7d5206-51f2-499a-8e45-dc97d339ed69" 00:28:07.517 } 00:28:07.517 05:30:00 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:07.517 05:30:00 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:07.517 05:30:00 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:07.517 05:30:00 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:07.781 [2024-11-10 05:30:00.826657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.781 [2024-11-10 05:30:00.826706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:07.781 [2024-11-10 05:30:00.826722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:07.781 [2024-11-10 05:30:00.826731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.781 [2024-11-10 05:30:00.826763] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:07.781 [2024-11-10 05:30:00.827765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.781 [2024-11-10 05:30:00.827818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:07.781 [2024-11-10 05:30:00.827832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:28:07.781 [2024-11-10 05:30:00.827845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.781 [2024-11-10 05:30:00.828151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.781 [2024-11-10 05:30:00.828176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:07.781 [2024-11-10 05:30:00.828186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:28:07.781 [2024-11-10 05:30:00.828198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.781 [2024-11-10 05:30:00.831447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.781 [2024-11-10 05:30:00.831476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:07.781 [2024-11-10 05:30:00.831487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.231 ms 00:28:07.781 [2024-11-10 05:30:00.831497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.781 [2024-11-10 05:30:00.837716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.781 [2024-11-10 05:30:00.837756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:07.781 [2024-11-10 05:30:00.837769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.199 ms 00:28:07.781 [2024-11-10 05:30:00.837780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.781 [2024-11-10 05:30:00.840987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
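The echo sandwich around save_subsystem_config above is how the test turns the single-subsystem RPC output into a complete SPDK config; the combined output is presumably redirected into test/ftl/config/ftl.json, the file spdk_dd is handed via --json further down. Roughly:

  # sketch of the config capture; the redirection target is inferred from the
  # --json path used by spdk_dd later in this log (xtrace does not show redirections)
  {
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json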
00:28:07.781 [2024-11-10 05:30:00.841053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:07.781 [2024-11-10 05:30:00.841064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.116 ms 00:28:07.781 [2024-11-10 05:30:00.841079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.781 [2024-11-10 05:30:00.848646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.781 [2024-11-10 05:30:00.848698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:07.781 [2024-11-10 05:30:00.848711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.517 ms 00:28:07.781 [2024-11-10 05:30:00.848726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.781 [2024-11-10 05:30:00.848873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.781 [2024-11-10 05:30:00.848888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:07.781 [2024-11-10 05:30:00.848898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:28:07.781 [2024-11-10 05:30:00.848910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.781 [2024-11-10 05:30:00.852275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.781 [2024-11-10 05:30:00.852327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:07.781 [2024-11-10 05:30:00.852337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.340 ms 00:28:07.781 [2024-11-10 05:30:00.852347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.781 [2024-11-10 05:30:00.855621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.781 [2024-11-10 05:30:00.855672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:07.781 [2024-11-10 05:30:00.855683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.222 ms 00:28:07.781 [2024-11-10 05:30:00.855694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.781 [2024-11-10 05:30:00.858049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.781 [2024-11-10 05:30:00.858098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:07.782 [2024-11-10 05:30:00.858109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.307 ms 00:28:07.782 [2024-11-10 05:30:00.858121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.782 [2024-11-10 05:30:00.860332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.782 [2024-11-10 05:30:00.860394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:07.782 [2024-11-10 05:30:00.860404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.117 ms 00:28:07.782 [2024-11-10 05:30:00.860414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.782 [2024-11-10 05:30:00.860460] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:07.782 [2024-11-10 05:30:00.860485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860509] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860761] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.860983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 
05:30:00.861035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:28:07.782 [2024-11-10 05:30:00.861295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:07.782 [2024-11-10 05:30:00.861343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:07.783 [2024-11-10 05:30:00.861528] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:07.783 [2024-11-10 05:30:00.861537] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5d7d5206-51f2-499a-8e45-dc97d339ed69 00:28:07.783 
[2024-11-10 05:30:00.861548] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:07.783 [2024-11-10 05:30:00.861557] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:07.783 [2024-11-10 05:30:00.861569] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:07.783 [2024-11-10 05:30:00.861579] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:07.783 [2024-11-10 05:30:00.861589] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:07.783 [2024-11-10 05:30:00.861598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:07.783 [2024-11-10 05:30:00.861609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:07.783 [2024-11-10 05:30:00.861616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:07.783 [2024-11-10 05:30:00.861624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:07.783 [2024-11-10 05:30:00.861632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.783 [2024-11-10 05:30:00.861646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:07.783 [2024-11-10 05:30:00.861655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.173 ms 00:28:07.783 [2024-11-10 05:30:00.861666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.783 [2024-11-10 05:30:00.864414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.783 [2024-11-10 05:30:00.864451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:07.783 [2024-11-10 05:30:00.864463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.721 ms 00:28:07.783 [2024-11-10 05:30:00.864480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.783 [2024-11-10 05:30:00.864604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.783 [2024-11-10 05:30:00.864618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:07.783 [2024-11-10 05:30:00.864628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:28:07.783 [2024-11-10 05:30:00.864638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.783 [2024-11-10 05:30:00.875643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.783 [2024-11-10 05:30:00.875697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:07.783 [2024-11-10 05:30:00.875709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.783 [2024-11-10 05:30:00.875720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.783 [2024-11-10 05:30:00.875799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.783 [2024-11-10 05:30:00.875813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:07.783 [2024-11-10 05:30:00.875822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.783 [2024-11-10 05:30:00.875834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.783 [2024-11-10 05:30:00.875919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.783 [2024-11-10 05:30:00.875939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:07.783 [2024-11-10 05:30:00.875948] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.783 [2024-11-10 05:30:00.875980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.783 [2024-11-10 05:30:00.876059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.783 [2024-11-10 05:30:00.876077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:07.783 [2024-11-10 05:30:00.876087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.783 [2024-11-10 05:30:00.876105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.783 [2024-11-10 05:30:00.895524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.783 [2024-11-10 05:30:00.895589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:07.783 [2024-11-10 05:30:00.895604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.783 [2024-11-10 05:30:00.895616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.783 [2024-11-10 05:30:00.911367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.783 [2024-11-10 05:30:00.911435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:07.783 [2024-11-10 05:30:00.911448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.783 [2024-11-10 05:30:00.911469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.783 [2024-11-10 05:30:00.911570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.783 [2024-11-10 05:30:00.911589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:07.783 [2024-11-10 05:30:00.911598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.783 [2024-11-10 05:30:00.911610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.783 [2024-11-10 05:30:00.911705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.783 [2024-11-10 05:30:00.911719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:07.783 [2024-11-10 05:30:00.911733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.783 [2024-11-10 05:30:00.911744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.783 [2024-11-10 05:30:00.911831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.783 [2024-11-10 05:30:00.911860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:07.783 [2024-11-10 05:30:00.911869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.783 [2024-11-10 05:30:00.911879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.783 [2024-11-10 05:30:00.911925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.783 [2024-11-10 05:30:00.911938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:07.783 [2024-11-10 05:30:00.911946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.783 [2024-11-10 05:30:00.911976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.783 [2024-11-10 05:30:00.912051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.783 [2024-11-10 05:30:00.912069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:28:07.783 [2024-11-10 05:30:00.912079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.783 [2024-11-10 05:30:00.912091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.783 [2024-11-10 05:30:00.912153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.783 [2024-11-10 05:30:00.912168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:07.783 [2024-11-10 05:30:00.912182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.783 [2024-11-10 05:30:00.912194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.783 [2024-11-10 05:30:00.912376] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.661 ms, result 0 00:28:07.783 true 00:28:07.783 05:30:00 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 92829 00:28:07.783 05:30:00 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 92829 ']' 00:28:07.783 05:30:00 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 92829 00:28:07.783 05:30:00 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:28:07.783 05:30:00 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:07.783 05:30:00 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92829 00:28:07.783 05:30:00 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:07.783 killing process with pid 92829 00:28:07.783 05:30:00 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:07.783 05:30:00 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92829' 00:28:07.783 05:30:00 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 92829 00:28:07.783 05:30:00 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 92829 00:28:13.087 05:30:06 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:17.294 262144+0 records in 00:28:17.294 262144+0 records out 00:28:17.294 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.07647 s, 263 MB/s 00:28:17.294 05:30:10 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:18.680 05:30:11 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:18.680 [2024-11-10 05:30:11.830896] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
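The dd/md5sum/spdk_dd sequence above seeds 1 GiB of random data and, presumably, records its checksum so the contents can be verified once the FTL device is restored. The dd figures check out:

  # 256K blocks of 4 KiB each, and dd's throughput in decimal MB/s
  echo $(( 262144 * 4096 ))                                         # 1073741824 bytes = 1 GiB
  awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 4.07647 / 1e6 }'  # ~263 MB/s, as reported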
00:28:18.680 [2024-11-10 05:30:11.830987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93037 ] 00:28:18.941 [2024-11-10 05:30:11.972782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.941 [2024-11-10 05:30:12.017235] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.941 [2024-11-10 05:30:12.120472] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:18.941 [2024-11-10 05:30:12.120538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:19.202 [2024-11-10 05:30:12.279045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.202 [2024-11-10 05:30:12.279085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:19.202 [2024-11-10 05:30:12.279101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:19.202 [2024-11-10 05:30:12.279110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.202 [2024-11-10 05:30:12.279160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.202 [2024-11-10 05:30:12.279170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:19.202 [2024-11-10 05:30:12.279178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:19.202 [2024-11-10 05:30:12.279192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.202 [2024-11-10 05:30:12.279212] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:19.202 [2024-11-10 05:30:12.279526] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:19.202 [2024-11-10 05:30:12.279556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.202 [2024-11-10 05:30:12.279564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:19.202 [2024-11-10 05:30:12.279580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:28:19.202 [2024-11-10 05:30:12.279590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.202 [2024-11-10 05:30:12.280978] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:19.202 [2024-11-10 05:30:12.284094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.202 [2024-11-10 05:30:12.284129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:19.202 [2024-11-10 05:30:12.284139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.118 ms 00:28:19.202 [2024-11-10 05:30:12.284147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.202 [2024-11-10 05:30:12.284203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.202 [2024-11-10 05:30:12.284212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:19.202 [2024-11-10 05:30:12.284221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:19.202 [2024-11-10 05:30:12.284230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.202 [2024-11-10 05:30:12.290862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:19.202 [2024-11-10 05:30:12.290890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:19.202 [2024-11-10 05:30:12.290904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.583 ms 00:28:19.202 [2024-11-10 05:30:12.290914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.202 [2024-11-10 05:30:12.291006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.202 [2024-11-10 05:30:12.291016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:19.202 [2024-11-10 05:30:12.291024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:28:19.202 [2024-11-10 05:30:12.291032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.202 [2024-11-10 05:30:12.291075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.202 [2024-11-10 05:30:12.291087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:19.202 [2024-11-10 05:30:12.291097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:19.202 [2024-11-10 05:30:12.291104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.202 [2024-11-10 05:30:12.291130] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:19.202 [2024-11-10 05:30:12.292832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.202 [2024-11-10 05:30:12.292873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:19.202 [2024-11-10 05:30:12.292882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:28:19.202 [2024-11-10 05:30:12.292893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.202 [2024-11-10 05:30:12.292945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.202 [2024-11-10 05:30:12.292953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:19.202 [2024-11-10 05:30:12.292966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:19.202 [2024-11-10 05:30:12.292973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.202 [2024-11-10 05:30:12.293053] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:19.202 [2024-11-10 05:30:12.293084] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:19.202 [2024-11-10 05:30:12.293132] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:19.202 [2024-11-10 05:30:12.293154] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:19.202 [2024-11-10 05:30:12.293275] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:19.202 [2024-11-10 05:30:12.293286] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:19.202 [2024-11-10 05:30:12.293309] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:19.202 [2024-11-10 05:30:12.293321] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:19.202 [2024-11-10 05:30:12.293334] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:19.202 [2024-11-10 05:30:12.293349] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:19.202 [2024-11-10 05:30:12.293357] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:19.202 [2024-11-10 05:30:12.293367] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:19.202 [2024-11-10 05:30:12.293381] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:19.202 [2024-11-10 05:30:12.293389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.202 [2024-11-10 05:30:12.293396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:19.202 [2024-11-10 05:30:12.293410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:28:19.202 [2024-11-10 05:30:12.293417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.202 [2024-11-10 05:30:12.293523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.202 [2024-11-10 05:30:12.293535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:19.202 [2024-11-10 05:30:12.293550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:28:19.202 [2024-11-10 05:30:12.293558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.202 [2024-11-10 05:30:12.293660] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:19.202 [2024-11-10 05:30:12.293671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:19.202 [2024-11-10 05:30:12.293680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:19.202 [2024-11-10 05:30:12.293698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.202 [2024-11-10 05:30:12.293707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:19.202 [2024-11-10 05:30:12.293721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:19.202 [2024-11-10 05:30:12.293729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:19.203 [2024-11-10 05:30:12.293742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:19.203 [2024-11-10 05:30:12.293750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:19.203 [2024-11-10 05:30:12.293763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:19.203 [2024-11-10 05:30:12.293771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:19.203 [2024-11-10 05:30:12.293779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:19.203 [2024-11-10 05:30:12.293807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:19.203 [2024-11-10 05:30:12.293815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:19.203 [2024-11-10 05:30:12.293827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:19.203 [2024-11-10 05:30:12.293839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.203 [2024-11-10 05:30:12.293854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:19.203 [2024-11-10 05:30:12.293861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:19.203 [2024-11-10 05:30:12.293873] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.203 [2024-11-10 05:30:12.293881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:19.203 [2024-11-10 05:30:12.293892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:19.203 [2024-11-10 05:30:12.293899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.203 [2024-11-10 05:30:12.293905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:19.203 [2024-11-10 05:30:12.293913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:19.203 [2024-11-10 05:30:12.293921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.203 [2024-11-10 05:30:12.293927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:19.203 [2024-11-10 05:30:12.293933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:19.203 [2024-11-10 05:30:12.293940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.203 [2024-11-10 05:30:12.293953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:19.203 [2024-11-10 05:30:12.293959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:19.203 [2024-11-10 05:30:12.293965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.203 [2024-11-10 05:30:12.293973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:19.203 [2024-11-10 05:30:12.293979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:19.203 [2024-11-10 05:30:12.293986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:19.203 [2024-11-10 05:30:12.294005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:19.203 [2024-11-10 05:30:12.294012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:19.203 [2024-11-10 05:30:12.294018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:19.203 [2024-11-10 05:30:12.294025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:19.203 [2024-11-10 05:30:12.294031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:19.203 [2024-11-10 05:30:12.294038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.203 [2024-11-10 05:30:12.294044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:19.203 [2024-11-10 05:30:12.294051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:19.203 [2024-11-10 05:30:12.294057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.203 [2024-11-10 05:30:12.294064] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:19.203 [2024-11-10 05:30:12.294076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:19.203 [2024-11-10 05:30:12.294083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:19.203 [2024-11-10 05:30:12.294094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.203 [2024-11-10 05:30:12.294104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:19.203 [2024-11-10 05:30:12.294111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:19.203 [2024-11-10 05:30:12.294118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:19.203 
[2024-11-10 05:30:12.294125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:19.203 [2024-11-10 05:30:12.294132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:19.203 [2024-11-10 05:30:12.294139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:19.203 [2024-11-10 05:30:12.294147] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:19.203 [2024-11-10 05:30:12.294157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.203 [2024-11-10 05:30:12.294169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:19.203 [2024-11-10 05:30:12.294183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:19.203 [2024-11-10 05:30:12.294193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:19.203 [2024-11-10 05:30:12.294201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:19.203 [2024-11-10 05:30:12.294212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:19.203 [2024-11-10 05:30:12.294222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:19.203 [2024-11-10 05:30:12.294233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:19.203 [2024-11-10 05:30:12.294240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:19.203 [2024-11-10 05:30:12.294249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:19.203 [2024-11-10 05:30:12.294261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:19.203 [2024-11-10 05:30:12.294273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:19.203 [2024-11-10 05:30:12.294280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:19.203 [2024-11-10 05:30:12.294291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:19.203 [2024-11-10 05:30:12.294303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:19.203 [2024-11-10 05:30:12.294314] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:19.203 [2024-11-10 05:30:12.294326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.203 [2024-11-10 05:30:12.294334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:19.203 [2024-11-10 05:30:12.294342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:19.203 [2024-11-10 05:30:12.294350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:19.203 [2024-11-10 05:30:12.294358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:19.203 [2024-11-10 05:30:12.294365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-11-10 05:30:12.294376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:19.203 [2024-11-10 05:30:12.294388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:28:19.203 [2024-11-10 05:30:12.294399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-11-10 05:30:12.314690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-11-10 05:30:12.314747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:19.203 [2024-11-10 05:30:12.314782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.240 ms 00:28:19.203 [2024-11-10 05:30:12.314796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-11-10 05:30:12.314938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-11-10 05:30:12.314953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:19.203 [2024-11-10 05:30:12.314966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:28:19.203 [2024-11-10 05:30:12.314978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-11-10 05:30:12.326767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-11-10 05:30:12.326801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:19.203 [2024-11-10 05:30:12.326811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.692 ms 00:28:19.203 [2024-11-10 05:30:12.326818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-11-10 05:30:12.326850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-11-10 05:30:12.326859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:19.203 [2024-11-10 05:30:12.326867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:19.203 [2024-11-10 05:30:12.326874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-11-10 05:30:12.327347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-11-10 05:30:12.327376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:19.203 [2024-11-10 05:30:12.327385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:28:19.203 [2024-11-10 05:30:12.327393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-11-10 05:30:12.327527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-11-10 05:30:12.327539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:19.203 [2024-11-10 05:30:12.327548] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:28:19.203 [2024-11-10 05:30:12.327556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.203 [2024-11-10 05:30:12.333750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.203 [2024-11-10 05:30:12.333786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:19.203 [2024-11-10 05:30:12.333798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.171 ms 00:28:19.203 [2024-11-10 05:30:12.333806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-11-10 05:30:12.337086] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:19.204 [2024-11-10 05:30:12.337125] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:19.204 [2024-11-10 05:30:12.337140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-11-10 05:30:12.337149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:19.204 [2024-11-10 05:30:12.337157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.250 ms 00:28:19.204 [2024-11-10 05:30:12.337165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-11-10 05:30:12.352172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-11-10 05:30:12.352209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:19.204 [2024-11-10 05:30:12.352220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.969 ms 00:28:19.204 [2024-11-10 05:30:12.352231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-11-10 05:30:12.354479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-11-10 05:30:12.354509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:19.204 [2024-11-10 05:30:12.354518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.204 ms 00:28:19.204 [2024-11-10 05:30:12.354526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-11-10 05:30:12.356619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-11-10 05:30:12.356648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:19.204 [2024-11-10 05:30:12.356657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.062 ms 00:28:19.204 [2024-11-10 05:30:12.356665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-11-10 05:30:12.357011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-11-10 05:30:12.357031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:19.204 [2024-11-10 05:30:12.357041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:28:19.204 [2024-11-10 05:30:12.357049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-11-10 05:30:12.377409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-11-10 05:30:12.377447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:19.204 [2024-11-10 05:30:12.377469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.344 ms 00:28:19.204 [2024-11-10 05:30:12.377478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-11-10 05:30:12.385371] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:19.204 [2024-11-10 05:30:12.388115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-11-10 05:30:12.388144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:19.204 [2024-11-10 05:30:12.388155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.595 ms 00:28:19.204 [2024-11-10 05:30:12.388171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-11-10 05:30:12.388260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-11-10 05:30:12.388272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:19.204 [2024-11-10 05:30:12.388280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:19.204 [2024-11-10 05:30:12.388289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-11-10 05:30:12.388357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-11-10 05:30:12.388368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:19.204 [2024-11-10 05:30:12.388376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:19.204 [2024-11-10 05:30:12.388383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-11-10 05:30:12.388406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-11-10 05:30:12.388414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:19.204 [2024-11-10 05:30:12.388422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:19.204 [2024-11-10 05:30:12.388430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-11-10 05:30:12.388462] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:19.204 [2024-11-10 05:30:12.388474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-11-10 05:30:12.388482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:19.204 [2024-11-10 05:30:12.388494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:19.204 [2024-11-10 05:30:12.388501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-11-10 05:30:12.392571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-11-10 05:30:12.392614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:19.204 [2024-11-10 05:30:12.392624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.049 ms 00:28:19.204 [2024-11-10 05:30:12.392631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 [2024-11-10 05:30:12.392701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.204 [2024-11-10 05:30:12.392711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:19.204 [2024-11-10 05:30:12.392720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:19.204 [2024-11-10 05:30:12.392728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.204 
[2024-11-10 05:30:12.393751] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 114.269 ms, result 0 00:28:20.586  [2024-11-10T05:31:04.404Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-11-10 05:31:04.057859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.168 [2024-11-10 05:31:04.057896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:11.168 [2024-11-10
05:31:04.057911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:11.168 [2024-11-10 05:31:04.057917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.168 [2024-11-10 05:31:04.057933] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:11.168 [2024-11-10 05:31:04.058353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.168 [2024-11-10 05:31:04.058368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:11.168 [2024-11-10 05:31:04.058375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:29:11.168 [2024-11-10 05:31:04.058381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.168 [2024-11-10 05:31:04.059687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.168 [2024-11-10 05:31:04.059710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:11.168 [2024-11-10 05:31:04.059718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:29:11.168 [2024-11-10 05:31:04.059724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.168 [2024-11-10 05:31:04.059743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.168 [2024-11-10 05:31:04.059753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:11.168 [2024-11-10 05:31:04.059760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:11.168 [2024-11-10 05:31:04.059765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.168 [2024-11-10 05:31:04.059802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.169 [2024-11-10 05:31:04.059808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:11.169 [2024-11-10 05:31:04.059815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:11.169 [2024-11-10 05:31:04.059820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.169 [2024-11-10 05:31:04.059839] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:11.169 [2024-11-10 05:31:04.059849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: 
free 00:29:11.169 [2024-11-10 05:31:04.059903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.059987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 
261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060345] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:11.169 [2024-11-10 05:31:04.060351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:11.170 [2024-11-10 05:31:04.060356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:11.170 [2024-11-10 05:31:04.060362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:11.170 [2024-11-10 05:31:04.060367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:11.170 [2024-11-10 05:31:04.060373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:11.170 [2024-11-10 05:31:04.060378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:11.170 [2024-11-10 05:31:04.060384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:11.170 [2024-11-10 05:31:04.060389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:11.170 [2024-11-10 05:31:04.060394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:11.170 [2024-11-10 05:31:04.060400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:11.170 [2024-11-10 05:31:04.060405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:11.170 [2024-11-10 05:31:04.060411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:11.170 [2024-11-10 05:31:04.060416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:11.170 [2024-11-10 05:31:04.060421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:11.170 [2024-11-10 05:31:04.060427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:11.170 [2024-11-10 05:31:04.060432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:11.170 [2024-11-10 05:31:04.060444] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:11.170 [2024-11-10 05:31:04.060452] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5d7d5206-51f2-499a-8e45-dc97d339ed69 00:29:11.170 [2024-11-10 05:31:04.060458] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:11.170 [2024-11-10 05:31:04.060468] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:11.170 [2024-11-10 05:31:04.060474] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:11.170 [2024-11-10 05:31:04.060480] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:11.170 [2024-11-10 05:31:04.060485] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:11.170 [2024-11-10 05:31:04.060491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:11.170 [2024-11-10 05:31:04.060500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:11.170 [2024-11-10 05:31:04.060505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:11.170 [2024-11-10 
05:31:04.060510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:11.170 [2024-11-10 05:31:04.060516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.170 [2024-11-10 05:31:04.060522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:11.170 [2024-11-10 05:31:04.060528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:29:11.170 [2024-11-10 05:31:04.060534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.170 [2024-11-10 05:31:04.061771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.170 [2024-11-10 05:31:04.061794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:11.170 [2024-11-10 05:31:04.061801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:29:11.170 [2024-11-10 05:31:04.061807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.170 [2024-11-10 05:31:04.061873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.170 [2024-11-10 05:31:04.061883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:11.170 [2024-11-10 05:31:04.061890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:11.170 [2024-11-10 05:31:04.061898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.170 [2024-11-10 05:31:04.065631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.170 [2024-11-10 05:31:04.065655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:11.170 [2024-11-10 05:31:04.065661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.170 [2024-11-10 05:31:04.065667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.170 [2024-11-10 05:31:04.065707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.170 [2024-11-10 05:31:04.065714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:11.170 [2024-11-10 05:31:04.065720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.170 [2024-11-10 05:31:04.065732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.170 [2024-11-10 05:31:04.065756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.170 [2024-11-10 05:31:04.065762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:11.170 [2024-11-10 05:31:04.065768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.170 [2024-11-10 05:31:04.065774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.170 [2024-11-10 05:31:04.065785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.170 [2024-11-10 05:31:04.065791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:11.170 [2024-11-10 05:31:04.065800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.170 [2024-11-10 05:31:04.065805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.170 [2024-11-10 05:31:04.073337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.170 [2024-11-10 05:31:04.073367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:11.170 [2024-11-10 05:31:04.073374] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.170 [2024-11-10 05:31:04.073380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.170 [2024-11-10 05:31:04.079358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.170 [2024-11-10 05:31:04.079397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:11.170 [2024-11-10 05:31:04.079405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.170 [2024-11-10 05:31:04.079413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.170 [2024-11-10 05:31:04.079450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.170 [2024-11-10 05:31:04.079457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:11.170 [2024-11-10 05:31:04.079463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.170 [2024-11-10 05:31:04.079473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.170 [2024-11-10 05:31:04.079492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.170 [2024-11-10 05:31:04.079498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:11.170 [2024-11-10 05:31:04.079503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.170 [2024-11-10 05:31:04.079509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.170 [2024-11-10 05:31:04.079550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.170 [2024-11-10 05:31:04.079557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:11.170 [2024-11-10 05:31:04.079563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.170 [2024-11-10 05:31:04.079569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.170 [2024-11-10 05:31:04.079586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.170 [2024-11-10 05:31:04.079592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:11.170 [2024-11-10 05:31:04.079598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.170 [2024-11-10 05:31:04.079604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.170 [2024-11-10 05:31:04.079632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.170 [2024-11-10 05:31:04.079641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:11.170 [2024-11-10 05:31:04.079647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.170 [2024-11-10 05:31:04.079652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.170 [2024-11-10 05:31:04.079684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.170 [2024-11-10 05:31:04.079691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:11.170 [2024-11-10 05:31:04.079698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.170 [2024-11-10 05:31:04.079703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.170 [2024-11-10 05:31:04.079800] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 21.914 ms, result 0 00:29:11.745 00:29:11.745 00:29:11.745 05:31:04 
ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:11.745 [2024-11-10 05:31:04.824563] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:11.745 [2024-11-10 05:31:04.825130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93575 ] 00:29:11.745 [2024-11-10 05:31:04.970580] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.011 [2024-11-10 05:31:05.007706] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.011 [2024-11-10 05:31:05.090651] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:12.011 [2024-11-10 05:31:05.090842] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:12.273 [2024-11-10 05:31:05.247205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.274 [2024-11-10 05:31:05.247340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:12.274 [2024-11-10 05:31:05.247362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:12.274 [2024-11-10 05:31:05.247374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.274 [2024-11-10 05:31:05.247425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.274 [2024-11-10 05:31:05.247435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:12.274 [2024-11-10 05:31:05.247447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:12.274 [2024-11-10 05:31:05.247454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.274 [2024-11-10 05:31:05.247474] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:12.274 [2024-11-10 05:31:05.247793] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:12.274 [2024-11-10 05:31:05.247827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.274 [2024-11-10 05:31:05.247839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:12.274 [2024-11-10 05:31:05.247850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:29:12.274 [2024-11-10 05:31:05.247860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.274 [2024-11-10 05:31:05.248130] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:12.274 [2024-11-10 05:31:05.248154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.274 [2024-11-10 05:31:05.248161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:12.274 [2024-11-10 05:31:05.248170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:12.274 [2024-11-10 05:31:05.248177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.274 [2024-11-10 05:31:05.248258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.274 [2024-11-10 05:31:05.248270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Validate super block 00:29:12.274 [2024-11-10 05:31:05.248279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:12.274 [2024-11-10 05:31:05.248289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.274 [2024-11-10 05:31:05.248520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.274 [2024-11-10 05:31:05.248530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:12.274 [2024-11-10 05:31:05.248538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:29:12.274 [2024-11-10 05:31:05.248545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.274 [2024-11-10 05:31:05.248618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.274 [2024-11-10 05:31:05.248629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:12.274 [2024-11-10 05:31:05.248637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:12.274 [2024-11-10 05:31:05.248644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.274 [2024-11-10 05:31:05.248664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.274 [2024-11-10 05:31:05.248672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:12.274 [2024-11-10 05:31:05.248683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:12.274 [2024-11-10 05:31:05.248690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.274 [2024-11-10 05:31:05.248709] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:12.274 [2024-11-10 05:31:05.250263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.274 [2024-11-10 05:31:05.250290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:12.274 [2024-11-10 05:31:05.250301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.557 ms 00:29:12.274 [2024-11-10 05:31:05.250308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.274 [2024-11-10 05:31:05.250336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.274 [2024-11-10 05:31:05.250344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:12.274 [2024-11-10 05:31:05.250351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:12.274 [2024-11-10 05:31:05.250358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.274 [2024-11-10 05:31:05.250376] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:12.274 [2024-11-10 05:31:05.250394] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:12.274 [2024-11-10 05:31:05.250429] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:12.274 [2024-11-10 05:31:05.250443] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:12.274 [2024-11-10 05:31:05.250555] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:12.274 [2024-11-10 05:31:05.250565] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 
00:29:12.274 [2024-11-10 05:31:05.250575] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:12.274 [2024-11-10 05:31:05.250584] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:12.274 [2024-11-10 05:31:05.250593] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:12.274 [2024-11-10 05:31:05.250600] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:12.274 [2024-11-10 05:31:05.250612] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:12.274 [2024-11-10 05:31:05.250619] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:12.274 [2024-11-10 05:31:05.250626] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:12.274 [2024-11-10 05:31:05.250633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.274 [2024-11-10 05:31:05.250640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:12.274 [2024-11-10 05:31:05.250647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:29:12.274 [2024-11-10 05:31:05.250657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.274 [2024-11-10 05:31:05.250738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.274 [2024-11-10 05:31:05.250746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:12.274 [2024-11-10 05:31:05.250754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:12.274 [2024-11-10 05:31:05.250762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.274 [2024-11-10 05:31:05.250870] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:12.274 [2024-11-10 05:31:05.250887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:12.274 [2024-11-10 05:31:05.250895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:12.274 [2024-11-10 05:31:05.250902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.274 [2024-11-10 05:31:05.250913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:12.274 [2024-11-10 05:31:05.250926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:12.274 [2024-11-10 05:31:05.250934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:12.274 [2024-11-10 05:31:05.250942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:12.274 [2024-11-10 05:31:05.250950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:12.274 [2024-11-10 05:31:05.250957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:12.274 [2024-11-10 05:31:05.250964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:12.274 [2024-11-10 05:31:05.250972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:12.274 [2024-11-10 05:31:05.250979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:12.274 [2024-11-10 05:31:05.250986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:12.274 [2024-11-10 05:31:05.251005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:12.274 [2024-11-10 05:31:05.251013] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.274 [2024-11-10 05:31:05.251020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:12.274 [2024-11-10 05:31:05.251028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:12.274 [2024-11-10 05:31:05.251036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.274 [2024-11-10 05:31:05.251043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:12.274 [2024-11-10 05:31:05.251053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:12.274 [2024-11-10 05:31:05.251061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:12.274 [2024-11-10 05:31:05.251070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:12.275 [2024-11-10 05:31:05.251078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:12.275 [2024-11-10 05:31:05.251086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:12.275 [2024-11-10 05:31:05.251095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:12.275 [2024-11-10 05:31:05.251102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:12.275 [2024-11-10 05:31:05.251110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:12.275 [2024-11-10 05:31:05.251117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:12.275 [2024-11-10 05:31:05.251125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:12.275 [2024-11-10 05:31:05.251132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:12.275 [2024-11-10 05:31:05.251140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:12.275 [2024-11-10 05:31:05.251147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:12.275 [2024-11-10 05:31:05.251154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:12.275 [2024-11-10 05:31:05.251162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:12.275 [2024-11-10 05:31:05.251170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:12.275 [2024-11-10 05:31:05.251182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:12.275 [2024-11-10 05:31:05.251190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:12.275 [2024-11-10 05:31:05.251198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:12.275 [2024-11-10 05:31:05.251205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.275 [2024-11-10 05:31:05.251213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:12.275 [2024-11-10 05:31:05.251220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:12.275 [2024-11-10 05:31:05.251227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.275 [2024-11-10 05:31:05.251234] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:12.275 [2024-11-10 05:31:05.251243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:12.275 [2024-11-10 05:31:05.251251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:12.275 [2024-11-10 05:31:05.251259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.275 
[2024-11-10 05:31:05.251270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:12.275 [2024-11-10 05:31:05.251278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:12.275 [2024-11-10 05:31:05.251285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:12.275 [2024-11-10 05:31:05.251292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:12.275 [2024-11-10 05:31:05.251300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:12.275 [2024-11-10 05:31:05.251309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:12.275 [2024-11-10 05:31:05.251318] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:12.275 [2024-11-10 05:31:05.251331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:12.275 [2024-11-10 05:31:05.251341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:12.275 [2024-11-10 05:31:05.251349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:12.275 [2024-11-10 05:31:05.251357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:12.275 [2024-11-10 05:31:05.251365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:12.275 [2024-11-10 05:31:05.251373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:12.275 [2024-11-10 05:31:05.251381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:12.275 [2024-11-10 05:31:05.251389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:12.275 [2024-11-10 05:31:05.251398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:12.275 [2024-11-10 05:31:05.251405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:12.275 [2024-11-10 05:31:05.251413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:12.275 [2024-11-10 05:31:05.251421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:12.275 [2024-11-10 05:31:05.251429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:12.275 [2024-11-10 05:31:05.251437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:12.275 [2024-11-10 05:31:05.251447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:12.275 [2024-11-10 05:31:05.251454] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB 
metadata layout - base dev: 00:29:12.275 [2024-11-10 05:31:05.251462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:12.275 [2024-11-10 05:31:05.251473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:12.275 [2024-11-10 05:31:05.251480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:12.275 [2024-11-10 05:31:05.251487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:12.275 [2024-11-10 05:31:05.251494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:12.275 [2024-11-10 05:31:05.251501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.275 [2024-11-10 05:31:05.251508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:12.275 [2024-11-10 05:31:05.251516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:29:12.275 [2024-11-10 05:31:05.251523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.275 [2024-11-10 05:31:05.266947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.275 [2024-11-10 05:31:05.266986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:12.275 [2024-11-10 05:31:05.267015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.361 ms 00:29:12.275 [2024-11-10 05:31:05.267023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.275 [2024-11-10 05:31:05.267108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.275 [2024-11-10 05:31:05.267117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:12.275 [2024-11-10 05:31:05.267125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:12.275 [2024-11-10 05:31:05.267132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.275 [2024-11-10 05:31:05.275358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.275 [2024-11-10 05:31:05.275395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:12.275 [2024-11-10 05:31:05.275410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.170 ms 00:29:12.275 [2024-11-10 05:31:05.275418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.275 [2024-11-10 05:31:05.275447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.275 [2024-11-10 05:31:05.275456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:12.275 [2024-11-10 05:31:05.275465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:12.275 [2024-11-10 05:31:05.275473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.275 [2024-11-10 05:31:05.275549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.275 [2024-11-10 05:31:05.275560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:12.275 [2024-11-10 05:31:05.275569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:12.275 [2024-11-10 05:31:05.275581] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.275 [2024-11-10 05:31:05.275706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.275 [2024-11-10 05:31:05.275716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:12.275 [2024-11-10 05:31:05.275725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:29:12.275 [2024-11-10 05:31:05.275733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.275 [2024-11-10 05:31:05.280583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.275 [2024-11-10 05:31:05.280617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:12.275 [2024-11-10 05:31:05.280627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.825 ms 00:29:12.275 [2024-11-10 05:31:05.280642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.275 [2024-11-10 05:31:05.280757] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:12.275 [2024-11-10 05:31:05.280771] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:12.275 [2024-11-10 05:31:05.280782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.275 [2024-11-10 05:31:05.280796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:12.275 [2024-11-10 05:31:05.280805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:29:12.275 [2024-11-10 05:31:05.280813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.275 [2024-11-10 05:31:05.293647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.276 [2024-11-10 05:31:05.293685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:12.276 [2024-11-10 05:31:05.293695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.817 ms 00:29:12.276 [2024-11-10 05:31:05.293702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.276 [2024-11-10 05:31:05.293818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.276 [2024-11-10 05:31:05.293827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:12.276 [2024-11-10 05:31:05.293834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:29:12.276 [2024-11-10 05:31:05.293841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.276 [2024-11-10 05:31:05.293885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.276 [2024-11-10 05:31:05.293894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:12.276 [2024-11-10 05:31:05.293906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:12.276 [2024-11-10 05:31:05.293915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.276 [2024-11-10 05:31:05.294224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.276 [2024-11-10 05:31:05.294240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:12.276 [2024-11-10 05:31:05.294248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:29:12.276 [2024-11-10 05:31:05.294259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:29:12.276 [2024-11-10 05:31:05.294276] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:12.276 [2024-11-10 05:31:05.294286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.276 [2024-11-10 05:31:05.294293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:12.276 [2024-11-10 05:31:05.294301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:12.276 [2024-11-10 05:31:05.294310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.276 [2024-11-10 05:31:05.302283] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:12.276 [2024-11-10 05:31:05.302410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.276 [2024-11-10 05:31:05.302425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:12.276 [2024-11-10 05:31:05.302433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.084 ms 00:29:12.276 [2024-11-10 05:31:05.302444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.276 [2024-11-10 05:31:05.304805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.276 [2024-11-10 05:31:05.304830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:12.276 [2024-11-10 05:31:05.304839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.343 ms 00:29:12.276 [2024-11-10 05:31:05.304850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.276 [2024-11-10 05:31:05.304914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.276 [2024-11-10 05:31:05.304929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:12.276 [2024-11-10 05:31:05.304936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:29:12.276 [2024-11-10 05:31:05.304946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.276 [2024-11-10 05:31:05.304983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.276 [2024-11-10 05:31:05.305007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:12.276 [2024-11-10 05:31:05.305015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:12.276 [2024-11-10 05:31:05.305021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.276 [2024-11-10 05:31:05.305049] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:12.276 [2024-11-10 05:31:05.305061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.276 [2024-11-10 05:31:05.305070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:12.276 [2024-11-10 05:31:05.305078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:12.276 [2024-11-10 05:31:05.305085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.276 [2024-11-10 05:31:05.309460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.276 [2024-11-10 05:31:05.309496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:12.276 [2024-11-10 05:31:05.309510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.356 ms 00:29:12.276 [2024-11-10 05:31:05.309521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:12.276 [2024-11-10 05:31:05.309587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.276 [2024-11-10 05:31:05.309596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:12.276 [2024-11-10 05:31:05.309604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:12.276 [2024-11-10 05:31:05.309611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.276 [2024-11-10 05:31:05.310521] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 62.936 ms, result 0 00:29:13.661  [2024-11-10T05:31:07.841Z] Copying: 28/1024 [MB] (28 MBps) [2024-11-10T05:31:08.785Z] Copying: 44/1024 [MB] (15 MBps) [2024-11-10T05:31:09.729Z] Copying: 66/1024 [MB] (22 MBps) [2024-11-10T05:31:10.673Z] Copying: 84/1024 [MB] (18 MBps) [2024-11-10T05:31:11.616Z] Copying: 107/1024 [MB] (22 MBps) [2024-11-10T05:31:12.559Z] Copying: 130/1024 [MB] (22 MBps) [2024-11-10T05:31:13.502Z] Copying: 143/1024 [MB] (13 MBps) [2024-11-10T05:31:14.888Z] Copying: 154/1024 [MB] (10 MBps) [2024-11-10T05:31:15.832Z] Copying: 167/1024 [MB] (12 MBps) [2024-11-10T05:31:16.776Z] Copying: 181/1024 [MB] (14 MBps) [2024-11-10T05:31:17.718Z] Copying: 200/1024 [MB] (18 MBps) [2024-11-10T05:31:18.656Z] Copying: 217/1024 [MB] (17 MBps) [2024-11-10T05:31:19.624Z] Copying: 232/1024 [MB] (14 MBps) [2024-11-10T05:31:20.569Z] Copying: 246/1024 [MB] (14 MBps) [2024-11-10T05:31:21.509Z] Copying: 257/1024 [MB] (10 MBps) [2024-11-10T05:31:22.890Z] Copying: 267/1024 [MB] (10 MBps) [2024-11-10T05:31:23.830Z] Copying: 286/1024 [MB] (19 MBps) [2024-11-10T05:31:24.771Z] Copying: 302/1024 [MB] (15 MBps) [2024-11-10T05:31:25.712Z] Copying: 321/1024 [MB] (19 MBps) [2024-11-10T05:31:26.652Z] Copying: 338/1024 [MB] (16 MBps) [2024-11-10T05:31:27.593Z] Copying: 351/1024 [MB] (13 MBps) [2024-11-10T05:31:28.533Z] Copying: 368/1024 [MB] (16 MBps) [2024-11-10T05:31:29.915Z] Copying: 386/1024 [MB] (18 MBps) [2024-11-10T05:31:30.855Z] Copying: 403/1024 [MB] (17 MBps) [2024-11-10T05:31:31.799Z] Copying: 424/1024 [MB] (21 MBps) [2024-11-10T05:31:32.742Z] Copying: 435/1024 [MB] (10 MBps) [2024-11-10T05:31:33.685Z] Copying: 445/1024 [MB] (10 MBps) [2024-11-10T05:31:34.637Z] Copying: 456/1024 [MB] (10 MBps) [2024-11-10T05:31:35.580Z] Copying: 478/1024 [MB] (21 MBps) [2024-11-10T05:31:36.525Z] Copying: 489/1024 [MB] (10 MBps) [2024-11-10T05:31:37.912Z] Copying: 499/1024 [MB] (10 MBps) [2024-11-10T05:31:38.856Z] Copying: 510/1024 [MB] (10 MBps) [2024-11-10T05:31:39.801Z] Copying: 525/1024 [MB] (15 MBps) [2024-11-10T05:31:40.749Z] Copying: 536/1024 [MB] (10 MBps) [2024-11-10T05:31:41.691Z] Copying: 552/1024 [MB] (16 MBps) [2024-11-10T05:31:42.634Z] Copying: 570/1024 [MB] (17 MBps) [2024-11-10T05:31:43.577Z] Copying: 590/1024 [MB] (20 MBps) [2024-11-10T05:31:44.520Z] Copying: 604/1024 [MB] (14 MBps) [2024-11-10T05:31:45.907Z] Copying: 617/1024 [MB] (12 MBps) [2024-11-10T05:31:46.850Z] Copying: 631/1024 [MB] (14 MBps) [2024-11-10T05:31:47.794Z] Copying: 649/1024 [MB] (17 MBps) [2024-11-10T05:31:48.740Z] Copying: 664/1024 [MB] (15 MBps) [2024-11-10T05:31:49.686Z] Copying: 678/1024 [MB] (13 MBps) [2024-11-10T05:31:50.626Z] Copying: 693/1024 [MB] (15 MBps) [2024-11-10T05:31:51.569Z] Copying: 704/1024 [MB] (10 MBps) [2024-11-10T05:31:52.513Z] Copying: 714/1024 [MB] (10 MBps) [2024-11-10T05:31:53.901Z] Copying: 725/1024 [MB] (10 MBps) [2024-11-10T05:31:54.844Z] Copying: 735/1024 [MB] (10 MBps) 
[2024-11-10T05:31:55.786Z] Copying: 746/1024 [MB] (10 MBps) [2024-11-10T05:31:56.728Z] Copying: 756/1024 [MB] (10 MBps) [2024-11-10T05:31:57.672Z] Copying: 767/1024 [MB] (10 MBps) [2024-11-10T05:31:58.615Z] Copying: 777/1024 [MB] (10 MBps) [2024-11-10T05:31:59.558Z] Copying: 788/1024 [MB] (10 MBps) [2024-11-10T05:32:00.500Z] Copying: 798/1024 [MB] (10 MBps) [2024-11-10T05:32:01.886Z] Copying: 808/1024 [MB] (10 MBps) [2024-11-10T05:32:02.842Z] Copying: 819/1024 [MB] (10 MBps) [2024-11-10T05:32:03.786Z] Copying: 839/1024 [MB] (19 MBps) [2024-11-10T05:32:04.731Z] Copying: 855/1024 [MB] (15 MBps) [2024-11-10T05:32:05.677Z] Copying: 871/1024 [MB] (15 MBps) [2024-11-10T05:32:06.622Z] Copying: 890/1024 [MB] (19 MBps) [2024-11-10T05:32:07.566Z] Copying: 911/1024 [MB] (20 MBps) [2024-11-10T05:32:08.509Z] Copying: 930/1024 [MB] (19 MBps) [2024-11-10T05:32:09.897Z] Copying: 946/1024 [MB] (16 MBps) [2024-11-10T05:32:10.839Z] Copying: 964/1024 [MB] (17 MBps) [2024-11-10T05:32:11.784Z] Copying: 983/1024 [MB] (19 MBps) [2024-11-10T05:32:12.728Z] Copying: 998/1024 [MB] (15 MBps) [2024-11-10T05:32:13.302Z] Copying: 1012/1024 [MB] (13 MBps) [2024-11-10T05:32:13.302Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-10 05:32:13.131385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.066 [2024-11-10 05:32:13.131522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:20.066 [2024-11-10 05:32:13.131558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:20.066 [2024-11-10 05:32:13.131581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.066 [2024-11-10 05:32:13.131637] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:20.066 [2024-11-10 05:32:13.132722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.066 [2024-11-10 05:32:13.132903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:20.066 [2024-11-10 05:32:13.132937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:30:20.066 [2024-11-10 05:32:13.132958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.066 [2024-11-10 05:32:13.133575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.066 [2024-11-10 05:32:13.133620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:20.066 [2024-11-10 05:32:13.133647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:30:20.066 [2024-11-10 05:32:13.133669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.066 [2024-11-10 05:32:13.133740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.066 [2024-11-10 05:32:13.133778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:20.066 [2024-11-10 05:32:13.133810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:20.066 [2024-11-10 05:32:13.133832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.066 [2024-11-10 05:32:13.133963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.066 [2024-11-10 05:32:13.133986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:20.066 [2024-11-10 05:32:13.134045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:20.066 [2024-11-10 05:32:13.134064] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.066 [2024-11-10 05:32:13.134100] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:20.066 [2024-11-10 05:32:13.134140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:20.066 [2024-11-10 05:32:13.134645 - 05:32:13.137823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 25-98: 0 / 261120 wr_cnt: 0 state: free (74 identical records collapsed)
00:30:20.067 [2024-11-10 05:32:13.138532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:20.067 [2024-11-10 05:32:13.139111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:20.067 [2024-11-10 05:32:13.139236] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:20.067 [2024-11-10 05:32:13.139284] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5d7d5206-51f2-499a-8e45-dc97d339ed69 00:30:20.067 [2024-11-10 05:32:13.139331] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:20.067 [2024-11-10 05:32:13.139351] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:20.067 [2024-11-10 05:32:13.139370] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:20.067 [2024-11-10 05:32:13.139390] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:20.067 [2024-11-10 05:32:13.139418] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:20.067 [2024-11-10 05:32:13.139481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:20.067 [2024-11-10 05:32:13.139504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:20.067 [2024-11-10 05:32:13.139522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:20.067 [2024-11-10 05:32:13.139541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:20.067 [2024-11-10 05:32:13.139590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.067 [2024-11-10 05:32:13.139646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:20.067 [2024-11-10 05:32:13.139703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.489 ms 00:30:20.067 [2024-11-10 05:32:13.139756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.067 [2024-11-10 05:32:13.143631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.067 [2024-11-10 05:32:13.143904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:20.067 [2024-11-10 05:32:13.143945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.724 ms 00:30:20.067 [2024-11-10 05:32:13.144044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.067 [2024-11-10 05:32:13.144236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.067 [2024-11-10 05:32:13.144273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:20.067 [2024-11-10 05:32:13.144297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:30:20.067 [2024-11-10 05:32:13.144318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.067 [2024-11-10 05:32:13.154091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.067 [2024-11-10 05:32:13.154261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:20.067 [2024-11-10 05:32:13.154280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.067 [2024-11-10 05:32:13.154289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.067 [2024-11-10 05:32:13.154357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.067 [2024-11-10 05:32:13.154366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize bands metadata 00:30:20.067 [2024-11-10 05:32:13.154374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.067 [2024-11-10 05:32:13.154382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.067 [2024-11-10 05:32:13.154454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.067 [2024-11-10 05:32:13.154465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:20.067 [2024-11-10 05:32:13.154473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.067 [2024-11-10 05:32:13.154481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.067 [2024-11-10 05:32:13.154498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.067 [2024-11-10 05:32:13.154511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:20.067 [2024-11-10 05:32:13.154519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.067 [2024-11-10 05:32:13.154527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.067 [2024-11-10 05:32:13.168139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.067 [2024-11-10 05:32:13.168189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:20.067 [2024-11-10 05:32:13.168199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.067 [2024-11-10 05:32:13.168207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.067 [2024-11-10 05:32:13.179024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.067 [2024-11-10 05:32:13.179068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:20.067 [2024-11-10 05:32:13.179078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.067 [2024-11-10 05:32:13.179087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.067 [2024-11-10 05:32:13.179138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.067 [2024-11-10 05:32:13.179148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:20.067 [2024-11-10 05:32:13.179156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.067 [2024-11-10 05:32:13.179164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.067 [2024-11-10 05:32:13.179199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.067 [2024-11-10 05:32:13.179208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:20.067 [2024-11-10 05:32:13.179222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.067 [2024-11-10 05:32:13.179230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.067 [2024-11-10 05:32:13.179282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.067 [2024-11-10 05:32:13.179294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:20.067 [2024-11-10 05:32:13.179303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.067 [2024-11-10 05:32:13.179310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.067 [2024-11-10 05:32:13.179339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.067 
[2024-11-10 05:32:13.179354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:20.067 [2024-11-10 05:32:13.179362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.068 [2024-11-10 05:32:13.179371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.068 [2024-11-10 05:32:13.179411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.068 [2024-11-10 05:32:13.179423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:20.068 [2024-11-10 05:32:13.179431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.068 [2024-11-10 05:32:13.179439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.068 [2024-11-10 05:32:13.179486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:20.068 [2024-11-10 05:32:13.179496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:20.068 [2024-11-10 05:32:13.179505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:20.068 [2024-11-10 05:32:13.179512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.068 [2024-11-10 05:32:13.179650] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 48.275 ms, result 0 00:30:20.328 00:30:20.328 00:30:20.328 05:32:13 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:22.876 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:22.876 05:32:15 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:22.876 [2024-11-10 05:32:15.733575] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
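At this point in the transcript the restore pass has completed cleanly: the 'FTL fast shutdown' management process finished in 48.275 ms with result 0, md5sum -c confirmed the restored testfile against its recorded checksum, and spdk_dd is relaunched against ftl0 with --seek=131072 to continue writing past the already-verified range. Two post-processing sketches may help when reading traces like the ones above; both are hypothetical helpers written against this log's record format, not part of the SPDK tree.

First, every management step is logged as the same four-record pattern from mngt/ftl_mngt.c (427 Action / 428 name / 430 duration / 431 status), so per-step timings can be pulled out of a captured log with a short script (a minimal sketch, assuming only the record format visible above):

#!/usr/bin/env python3
# Hypothetical helper: pair each '428:trace_step ... name:' record with the
# '430:trace_step ... duration:' record that follows it in a captured log.
import re
import sys

# A step name runs up to the trailing elapsed-time stamp, e.g. '00:29:12.275'.
RECORD = re.compile(
    r"428:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (?P<name>.+?) \d{2}:\d{2}:\d{2}\.\d{3}"
    r"|430:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: (?P<dur>[\d.]+) ms"
)

def step_timings(log_text):
    steps, pending = [], None
    for match in RECORD.finditer(log_text):
        if match.group("name") is not None:
            pending = match.group("name")   # remember the step name (428 record)
        elif pending is not None:
            steps.append((pending, float(match.group("dur"))))
            pending = None                  # duration (430 record) consumed
    return steps

if __name__ == "__main__":
    text = open(sys.argv[1]).read()
    # Print the ten slowest management steps found in the log.
    for name, ms in sorted(step_timings(text), key=lambda s: -s[1])[:10]:
        print(f"{ms:10.3f} ms  {name}")

Applied to the startup trace above, a helper like this would surface 'Initialize metadata' (15.361 ms) and 'Restore valid map metadata' (12.817 ms) as the largest contributors to the 62.936 ms 'FTL startup' total.

Second, the superblock dumps give region sizes in FTL blocks (hex blk_offs/blk_sz) while the ftl_layout.c dumps print the same regions in MiB (the 'blocks:' field there is already expressed in MiB). The figures are mutually consistent under a 4 KiB FTL block, which the log itself does not print, so treat that block size as an assumption:

# Hypothetical cross-check of the layout dumps above.
FTL_BLOCK = 4096  # bytes per FTL block (assumption; not printed in the log)

def blocks_to_mib(blk_sz_hex):
    return int(blk_sz_hex, 16) * FTL_BLOCK / (1024 * 1024)

# The 0x5000-block region (type 0x2) is the only 80 MiB region in the NV
# cache layout dump, matching 'Region l2p ... blocks: 80.00 MiB'.
assert blocks_to_mib("0x5000") == 80.0
# The 0x1900000-block base-dev region (type 0x9) matches
# 'Region data_btm ... blocks: 102400.00 MiB'.
assert blocks_to_mib("0x1900000") == 102400.0
# The L2P geometry also agrees with itself:
# 20971520 entries * 4-byte address size = 80 MiB, the l2p region size.
assert 20971520 * 4 / (1024 * 1024) == 80.0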
00:30:22.876 [2024-11-10 05:32:15.733800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94284 ] 00:30:22.876 [2024-11-10 05:32:15.899445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.877 [2024-11-10 05:32:15.949134] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.877 [2024-11-10 05:32:16.057200] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:22.877 [2024-11-10 05:32:16.057271] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:23.144 [2024-11-10 05:32:16.218593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.144 [2024-11-10 05:32:16.218659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:23.144 [2024-11-10 05:32:16.218679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:23.144 [2024-11-10 05:32:16.218688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.144 [2024-11-10 05:32:16.218751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.144 [2024-11-10 05:32:16.218763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:23.144 [2024-11-10 05:32:16.218772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:30:23.144 [2024-11-10 05:32:16.218780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.144 [2024-11-10 05:32:16.218801] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:23.144 [2024-11-10 05:32:16.219110] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:23.144 [2024-11-10 05:32:16.219130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.144 [2024-11-10 05:32:16.219143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:23.144 [2024-11-10 05:32:16.219155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:30:23.144 [2024-11-10 05:32:16.219169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.144 [2024-11-10 05:32:16.219460] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:23.144 [2024-11-10 05:32:16.219486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.144 [2024-11-10 05:32:16.219495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:23.144 [2024-11-10 05:32:16.219510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:23.144 [2024-11-10 05:32:16.219518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.144 [2024-11-10 05:32:16.219580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.144 [2024-11-10 05:32:16.219593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:23.144 [2024-11-10 05:32:16.219602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:23.144 [2024-11-10 05:32:16.219614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.144 [2024-11-10 05:32:16.219902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:23.144 [2024-11-10 05:32:16.219915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:23.144 [2024-11-10 05:32:16.219924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:30:23.144 [2024-11-10 05:32:16.219931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.144 [2024-11-10 05:32:16.220052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.144 [2024-11-10 05:32:16.220065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:23.144 [2024-11-10 05:32:16.220074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:30:23.144 [2024-11-10 05:32:16.220086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.144 [2024-11-10 05:32:16.220111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.144 [2024-11-10 05:32:16.220126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:23.144 [2024-11-10 05:32:16.220134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:23.144 [2024-11-10 05:32:16.220142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.144 [2024-11-10 05:32:16.220164] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:23.144 [2024-11-10 05:32:16.222314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.144 [2024-11-10 05:32:16.222360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:23.144 [2024-11-10 05:32:16.222374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.155 ms 00:30:23.144 [2024-11-10 05:32:16.222383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.144 [2024-11-10 05:32:16.222419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.144 [2024-11-10 05:32:16.222428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:23.144 [2024-11-10 05:32:16.222438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:23.144 [2024-11-10 05:32:16.222446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.144 [2024-11-10 05:32:16.222505] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:23.144 [2024-11-10 05:32:16.222533] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:23.144 [2024-11-10 05:32:16.222576] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:23.144 [2024-11-10 05:32:16.222594] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:23.144 [2024-11-10 05:32:16.222699] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:23.144 [2024-11-10 05:32:16.222712] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:23.144 [2024-11-10 05:32:16.222724] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:23.144 [2024-11-10 05:32:16.222735] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:23.144 [2024-11-10 05:32:16.222744] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:23.144 [2024-11-10 05:32:16.222755] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:23.144 [2024-11-10 05:32:16.222765] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:23.144 [2024-11-10 05:32:16.222773] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:23.145 [2024-11-10 05:32:16.222781] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:23.145 [2024-11-10 05:32:16.222792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.145 [2024-11-10 05:32:16.222799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:23.145 [2024-11-10 05:32:16.222807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:30:23.145 [2024-11-10 05:32:16.222819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.145 [2024-11-10 05:32:16.222902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.145 [2024-11-10 05:32:16.222910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:23.145 [2024-11-10 05:32:16.222917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:23.145 [2024-11-10 05:32:16.222927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.145 [2024-11-10 05:32:16.223063] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:23.145 [2024-11-10 05:32:16.223076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:23.145 [2024-11-10 05:32:16.223085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:23.145 [2024-11-10 05:32:16.223096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:23.145 [2024-11-10 05:32:16.223104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:23.145 [2024-11-10 05:32:16.223120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:23.145 [2024-11-10 05:32:16.223128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:23.145 [2024-11-10 05:32:16.223136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:23.145 [2024-11-10 05:32:16.223143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:23.145 [2024-11-10 05:32:16.223151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:23.145 [2024-11-10 05:32:16.223158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:23.145 [2024-11-10 05:32:16.223168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:23.145 [2024-11-10 05:32:16.223176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:23.145 [2024-11-10 05:32:16.223183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:23.145 [2024-11-10 05:32:16.223190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:23.145 [2024-11-10 05:32:16.223197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:23.145 [2024-11-10 05:32:16.223204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:23.145 [2024-11-10 05:32:16.223211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:23.145 [2024-11-10 05:32:16.223217] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:23.145 [2024-11-10 05:32:16.223227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:23.145 [2024-11-10 05:32:16.223235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:23.145 [2024-11-10 05:32:16.223242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:23.145 [2024-11-10 05:32:16.223249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:23.145 [2024-11-10 05:32:16.223256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:23.145 [2024-11-10 05:32:16.223263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:23.145 [2024-11-10 05:32:16.223269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:23.145 [2024-11-10 05:32:16.223275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:23.145 [2024-11-10 05:32:16.223282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:23.145 [2024-11-10 05:32:16.223288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:23.145 [2024-11-10 05:32:16.223294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:23.145 [2024-11-10 05:32:16.223301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:23.145 [2024-11-10 05:32:16.223308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:23.145 [2024-11-10 05:32:16.223315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:23.145 [2024-11-10 05:32:16.223321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:23.145 [2024-11-10 05:32:16.223328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:23.145 [2024-11-10 05:32:16.223340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:23.145 [2024-11-10 05:32:16.223347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:23.145 [2024-11-10 05:32:16.223354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:23.145 [2024-11-10 05:32:16.223360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:23.145 [2024-11-10 05:32:16.223367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:23.145 [2024-11-10 05:32:16.223373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:23.145 [2024-11-10 05:32:16.223380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:23.145 [2024-11-10 05:32:16.223388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:23.145 [2024-11-10 05:32:16.223395] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:23.145 [2024-11-10 05:32:16.223407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:23.145 [2024-11-10 05:32:16.223415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:23.145 [2024-11-10 05:32:16.223422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:23.145 [2024-11-10 05:32:16.223430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:23.145 [2024-11-10 05:32:16.223437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:23.145 [2024-11-10 05:32:16.223444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:23.145 
[2024-11-10 05:32:16.223450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:23.145 [2024-11-10 05:32:16.223459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:23.145 [2024-11-10 05:32:16.223467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:23.145 [2024-11-10 05:32:16.223476] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:23.145 [2024-11-10 05:32:16.223488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:23.145 [2024-11-10 05:32:16.223496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:23.145 [2024-11-10 05:32:16.223503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:23.145 [2024-11-10 05:32:16.223510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:23.145 [2024-11-10 05:32:16.223518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:23.145 [2024-11-10 05:32:16.223525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:23.145 [2024-11-10 05:32:16.223532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:23.145 [2024-11-10 05:32:16.223539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:23.145 [2024-11-10 05:32:16.223547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:23.145 [2024-11-10 05:32:16.223555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:23.145 [2024-11-10 05:32:16.223561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:23.145 [2024-11-10 05:32:16.223568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:23.145 [2024-11-10 05:32:16.223575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:23.145 [2024-11-10 05:32:16.223585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:23.145 [2024-11-10 05:32:16.223593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:23.145 [2024-11-10 05:32:16.223600] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:23.145 [2024-11-10 05:32:16.223609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:23.145 [2024-11-10 05:32:16.223618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:23.145 [2024-11-10 05:32:16.223625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:23.145 [2024-11-10 05:32:16.223633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:23.145 [2024-11-10 05:32:16.223640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:23.145 [2024-11-10 05:32:16.223648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.145 [2024-11-10 05:32:16.223657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:23.146 [2024-11-10 05:32:16.223664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:30:23.146 [2024-11-10 05:32:16.223676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.244548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.244804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:23.146 [2024-11-10 05:32:16.244843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.823 ms 00:30:23.146 [2024-11-10 05:32:16.244856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.245020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.245036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:23.146 [2024-11-10 05:32:16.245050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:30:23.146 [2024-11-10 05:32:16.245061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.257354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.257412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:23.146 [2024-11-10 05:32:16.257426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.201 ms 00:30:23.146 [2024-11-10 05:32:16.257434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.257472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.257480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:23.146 [2024-11-10 05:32:16.257489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:23.146 [2024-11-10 05:32:16.257501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.257603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.257619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:23.146 [2024-11-10 05:32:16.257627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:23.146 [2024-11-10 05:32:16.257640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.257765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.257774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:23.146 [2024-11-10 05:32:16.257782] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:30:23.146 [2024-11-10 05:32:16.257793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.265219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.265265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:23.146 [2024-11-10 05:32:16.265276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.404 ms 00:30:23.146 [2024-11-10 05:32:16.265291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.265414] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:23.146 [2024-11-10 05:32:16.265427] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:23.146 [2024-11-10 05:32:16.265437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.265452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:23.146 [2024-11-10 05:32:16.265462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:30:23.146 [2024-11-10 05:32:16.265469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.277819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.277867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:23.146 [2024-11-10 05:32:16.277879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.331 ms 00:30:23.146 [2024-11-10 05:32:16.277886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.278040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.278056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:23.146 [2024-11-10 05:32:16.278066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:30:23.146 [2024-11-10 05:32:16.278073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.278133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.278141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:23.146 [2024-11-10 05:32:16.278154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:23.146 [2024-11-10 05:32:16.278165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.278482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.278493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:23.146 [2024-11-10 05:32:16.278501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:30:23.146 [2024-11-10 05:32:16.278516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.278535] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:23.146 [2024-11-10 05:32:16.278544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.278556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:30:23.146 [2024-11-10 05:32:16.278563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:23.146 [2024-11-10 05:32:16.278574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.287954] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:23.146 [2024-11-10 05:32:16.288273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.288309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:23.146 [2024-11-10 05:32:16.288420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.681 ms 00:30:23.146 [2024-11-10 05:32:16.288443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.290935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.291097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:23.146 [2024-11-10 05:32:16.291158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.452 ms 00:30:23.146 [2024-11-10 05:32:16.291180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.291309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.291337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:23.146 [2024-11-10 05:32:16.291357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:23.146 [2024-11-10 05:32:16.291439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.291485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.291514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:23.146 [2024-11-10 05:32:16.291534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:23.146 [2024-11-10 05:32:16.291553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.291603] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:23.146 [2024-11-10 05:32:16.291670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.291698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:23.146 [2024-11-10 05:32:16.291719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:30:23.146 [2024-11-10 05:32:16.291754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.298486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.298665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:23.146 [2024-11-10 05:32:16.299223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.683 ms 00:30:23.146 [2024-11-10 05:32:16.299252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.299361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.146 [2024-11-10 05:32:16.299373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:23.146 [2024-11-10 05:32:16.299384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.047 ms 00:30:23.146 [2024-11-10 05:32:16.299391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.146 [2024-11-10 05:32:16.300680] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 81.623 ms, result 0 00:30:24.089
[spdk_dd progress residue trimmed: per-tick "Copying: N/1024 [MB]" updates from 2024-11-10T05:32:18.710Z to 2024-11-10T05:33:12.480Z, advancing 0 -> 1024 MB at 10-41 MBps; final summary line retained below]
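Aside, not part of the captured output: the startup trace above logs every mngt/ftl_mngt step as a name/duration pair, and the finish_msg total of 81.623 ms should roughly equal the sum of those per-step durations. Below is a minimal Python sketch of that cross-check, under two assumptions not stated in the log: the log text was saved to a file passed as argv[1], and FTL metadata sizes are reported in 4 KiB blocks (consistent with the dump above, where blk_sz:0x5000 prints as 80.00 MiB).

import re
import sys

# Record shapes taken from the trace above, e.g.
#   "... 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:23.146 ..."
#   "... 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.201 ms ..."
NAME_RE = re.compile(r"428:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?) 00:")
DUR_RE = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def pair_steps(text):
    """Pair each step name with the duration record that follows it."""
    events = sorted(
        [(m.start(), "name", m.group(1)) for m in NAME_RE.finditer(text)]
        + [(m.start(), "dur", float(m.group(1))) for m in DUR_RE.finditer(text)]
    )
    steps, pending = [], None
    for _pos, kind, value in events:
        if kind == "name":
            pending = value
        elif pending is not None:
            steps.append((pending, value))
            pending = None
    return steps

def region_mib(blk_sz_hex, block_size=4096):
    """Decode a blk_sz field assuming 4 KiB FTL blocks: 0x5000 -> 80.00 MiB."""
    return int(blk_sz_hex, 16) * block_size / 2**20

if __name__ == "__main__":
    text = open(sys.argv[1]).read()  # hypothetical saved copy of this log
    steps = pair_steps(text)
    for name, ms in steps:
        print(f"{ms:9.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):9.3f} ms  summed (compare with finish_msg)")
    print(f"l2p region: {region_mib('0x5000'):.2f} MiB")

Pairing by position is enough here because the 428:/430: records for a given step are always adjacent in this trace; names whose record happens to end a wrapped line without a trailing elapsed stamp would be skipped, so treat the total as approximate.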
Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-10 05:33:12.124907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.244 [2024-11-10 05:33:12.125020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:19.244 [2024-11-10 05:33:12.125039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:19.244 [2024-11-10 05:33:12.125048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.244 [2024-11-10 05:33:12.126530] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:19.244 [2024-11-10 05:33:12.130284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.244 [2024-11-10 05:33:12.130336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:19.244 [2024-11-10 05:33:12.130348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.716 ms 00:31:19.244 [2024-11-10 05:33:12.130358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.244 [2024-11-10 05:33:12.140421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.244 [2024-11-10 05:33:12.140470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:19.244 [2024-11-10 05:33:12.140490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.774 ms 00:31:19.244 [2024-11-10 05:33:12.140499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.244 [2024-11-10 05:33:12.140529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.244 [2024-11-10 05:33:12.140539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:19.244 [2024-11-10 05:33:12.140548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:19.244 [2024-11-10 05:33:12.140556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.244 [2024-11-10 05:33:12.140615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.244 [2024-11-10 05:33:12.140626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:19.244 [2024-11-10 05:33:12.140635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:31:19.244 [2024-11-10 05:33:12.140646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.244 [2024-11-10 05:33:12.140660] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:19.244 [2024-11-10 05:33:12.140672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128768 / 261120 wr_cnt: 1 state: open 00:31:19.244 [2024-11-10 05:33:12.140683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:19.244 [2024-11-10 05:33:12.140690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:19.244 [2024-11-10 05:33:12.140698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:19.244 [2024-11-10 05:33:12.140706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:19.244 [2024-11-10 05:33:12.140713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:19.244
[2024-11-10 05:33:12.140720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:19.244 [2024-11-10 05:33:12.140728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:19.244 [2024-11-10 05:33:12.140736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:19.244 [2024-11-10 05:33:12.140743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: 
free 00:31:19.245 [2024-11-10 05:33:12.140921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.140982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 
261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:19.245 [2024-11-10 05:33:12.141510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:19.246 [2024-11-10 05:33:12.141526] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:19.246 [2024-11-10 05:33:12.141534] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5d7d5206-51f2-499a-8e45-dc97d339ed69 00:31:19.246 [2024-11-10 05:33:12.141547] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128768 00:31:19.246 [2024-11-10 05:33:12.141555] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128800 00:31:19.246 [2024-11-10 05:33:12.141568] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128768 00:31:19.246 [2024-11-10 05:33:12.141575] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:31:19.246
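Aside, not part of the captured output: the dump_stats counters just above are internally consistent. WAF (write amplification factor) here is total media writes over user writes, and the printed 1.0002 follows directly from the two counters; the 32 extra blocks are presumably FTL's own metadata writes for this run. A quick Python check, with the values copied from this run:

# Counters from the ftl_dev_dump_stats record above (this run).
total_writes = 128800  # "total writes"
user_writes = 128768   # "user writes"

# WAF = media writes / host writes; prints 1.0002 like the log line.
print(f"WAF: {total_writes / user_writes:.4f}")

Note also that Band 1's fill (128768 / 261120) equals "total valid LBAs", i.e. the entire test file landed in a single open band.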
[2024-11-10 05:33:12.141583] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:19.246 [2024-11-10 05:33:12.141591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:19.246 [2024-11-10 05:33:12.141607] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:19.246 [2024-11-10 05:33:12.141614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:19.246 [2024-11-10 05:33:12.141620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:19.246 [2024-11-10 05:33:12.141628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.246 [2024-11-10 05:33:12.141636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:19.246 [2024-11-10 05:33:12.141644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:31:19.246 [2024-11-10 05:33:12.141652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.246 [2024-11-10 05:33:12.144043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.246 [2024-11-10 05:33:12.144075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:19.246 [2024-11-10 05:33:12.144094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.373 ms 00:31:19.246 [2024-11-10 05:33:12.144103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.246 [2024-11-10 05:33:12.144224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.246 [2024-11-10 05:33:12.144234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:19.246 [2024-11-10 05:33:12.144247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:31:19.246 [2024-11-10 05:33:12.144254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.246 [2024-11-10 05:33:12.150979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.246 [2024-11-10 05:33:12.151055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:19.246 [2024-11-10 05:33:12.151070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.246 [2024-11-10 05:33:12.151078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.246 [2024-11-10 05:33:12.151142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.246 [2024-11-10 05:33:12.151152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:19.246 [2024-11-10 05:33:12.151160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.246 [2024-11-10 05:33:12.151168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.246 [2024-11-10 05:33:12.151206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.246 [2024-11-10 05:33:12.151216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:19.246 [2024-11-10 05:33:12.151225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.246 [2024-11-10 05:33:12.151236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.246 [2024-11-10 05:33:12.151253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.246 [2024-11-10 05:33:12.151261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:19.246 [2024-11-10 05:33:12.151269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration:
0.000 ms 00:31:19.246 [2024-11-10 05:33:12.151277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.246 [2024-11-10 05:33:12.165175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.246 [2024-11-10 05:33:12.165230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:19.246 [2024-11-10 05:33:12.165243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.246 [2024-11-10 05:33:12.165258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.246 [2024-11-10 05:33:12.176417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.246 [2024-11-10 05:33:12.176467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:19.246 [2024-11-10 05:33:12.176478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.246 [2024-11-10 05:33:12.176487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.246 [2024-11-10 05:33:12.176537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.246 [2024-11-10 05:33:12.176548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:19.246 [2024-11-10 05:33:12.176566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.246 [2024-11-10 05:33:12.176576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.246 [2024-11-10 05:33:12.176615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.246 [2024-11-10 05:33:12.176625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:19.246 [2024-11-10 05:33:12.176634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.246 [2024-11-10 05:33:12.176642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.246 [2024-11-10 05:33:12.176702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.246 [2024-11-10 05:33:12.176711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:19.246 [2024-11-10 05:33:12.176720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.246 [2024-11-10 05:33:12.176728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.246 [2024-11-10 05:33:12.176752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.246 [2024-11-10 05:33:12.176771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:19.246 [2024-11-10 05:33:12.176780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.246 [2024-11-10 05:33:12.176788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.246 [2024-11-10 05:33:12.176827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.246 [2024-11-10 05:33:12.176836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:19.246 [2024-11-10 05:33:12.176845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.246 [2024-11-10 05:33:12.176854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.246 [2024-11-10 05:33:12.176905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:19.246 [2024-11-10 05:33:12.176915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:19.246 [2024-11-10 05:33:12.176924] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:19.246 [2024-11-10 05:33:12.176932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.246 [2024-11-10 05:33:12.177084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 53.825 ms, result 0 00:31:20.225 00:31:20.225 00:31:20.225 05:33:13 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:20.225 [2024-11-10 05:33:13.306170] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:31:20.225 [2024-11-10 05:33:13.306326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94863 ] 00:31:20.486 [2024-11-10 05:33:13.460429] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.486 [2024-11-10 05:33:13.511050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.486 [2024-11-10 05:33:13.626852] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:20.486 [2024-11-10 05:33:13.626941] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:20.749 [2024-11-10 05:33:13.786770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.749 [2024-11-10 05:33:13.786832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:20.749 [2024-11-10 05:33:13.786854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:20.749 [2024-11-10 05:33:13.786863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.749 [2024-11-10 05:33:13.786926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.749 [2024-11-10 05:33:13.786937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:20.749 [2024-11-10 05:33:13.786949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:20.749 [2024-11-10 05:33:13.786958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.749 [2024-11-10 05:33:13.786985] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:20.749 [2024-11-10 05:33:13.787281] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:20.749 [2024-11-10 05:33:13.787298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.749 [2024-11-10 05:33:13.787306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:20.749 [2024-11-10 05:33:13.787315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:31:20.749 [2024-11-10 05:33:13.787326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.749 [2024-11-10 05:33:13.787609] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:20.749 [2024-11-10 05:33:13.787642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.749 [2024-11-10 05:33:13.787651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 
00:31:20.749 [2024-11-10 05:33:13.787661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:20.749 [2024-11-10 05:33:13.787669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.749 [2024-11-10 05:33:13.787729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.749 [2024-11-10 05:33:13.787741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:20.749 [2024-11-10 05:33:13.787749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:20.749 [2024-11-10 05:33:13.787757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.749 [2024-11-10 05:33:13.788051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.749 [2024-11-10 05:33:13.788064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:20.749 [2024-11-10 05:33:13.788073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:31:20.749 [2024-11-10 05:33:13.788081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.749 [2024-11-10 05:33:13.788171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.749 [2024-11-10 05:33:13.788187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:20.749 [2024-11-10 05:33:13.788195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:31:20.749 [2024-11-10 05:33:13.788208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.749 [2024-11-10 05:33:13.788277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.749 [2024-11-10 05:33:13.788293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:20.749 [2024-11-10 05:33:13.788302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:20.749 [2024-11-10 05:33:13.788310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.749 [2024-11-10 05:33:13.788332] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:20.749 [2024-11-10 05:33:13.790545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.749 [2024-11-10 05:33:13.790583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:20.750 [2024-11-10 05:33:13.790594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.218 ms 00:31:20.750 [2024-11-10 05:33:13.790602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.750 [2024-11-10 05:33:13.790637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.750 [2024-11-10 05:33:13.790646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:20.750 [2024-11-10 05:33:13.790661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:20.750 [2024-11-10 05:33:13.790669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.750 [2024-11-10 05:33:13.790720] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:20.750 [2024-11-10 05:33:13.790745] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:20.750 [2024-11-10 05:33:13.790790] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:20.750 [2024-11-10 05:33:13.790809] 
upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:20.750 [2024-11-10 05:33:13.790914] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:20.750 [2024-11-10 05:33:13.790926] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:20.750 [2024-11-10 05:33:13.790937] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:20.750 [2024-11-10 05:33:13.790949] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:20.750 [2024-11-10 05:33:13.790958] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:20.750 [2024-11-10 05:33:13.790966] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:20.750 [2024-11-10 05:33:13.790977] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:20.750 [2024-11-10 05:33:13.790984] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:20.750 [2024-11-10 05:33:13.791012] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:20.750 [2024-11-10 05:33:13.791021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.750 [2024-11-10 05:33:13.791032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:20.750 [2024-11-10 05:33:13.791040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:31:20.750 [2024-11-10 05:33:13.791052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.750 [2024-11-10 05:33:13.791135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.750 [2024-11-10 05:33:13.791144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:20.750 [2024-11-10 05:33:13.791152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:20.750 [2024-11-10 05:33:13.791162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.750 [2024-11-10 05:33:13.791269] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:20.750 [2024-11-10 05:33:13.791281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:20.750 [2024-11-10 05:33:13.791289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:20.750 [2024-11-10 05:33:13.791296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:20.750 [2024-11-10 05:33:13.791304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:20.750 [2024-11-10 05:33:13.791318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:20.750 [2024-11-10 05:33:13.791326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:20.750 [2024-11-10 05:33:13.791332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:20.750 [2024-11-10 05:33:13.791342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:20.750 [2024-11-10 05:33:13.791348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:20.750 [2024-11-10 05:33:13.791360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:20.750 [2024-11-10 05:33:13.791367] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.62 MiB 00:31:20.750 [2024-11-10 05:33:13.791374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:20.750 [2024-11-10 05:33:13.791381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:20.750 [2024-11-10 05:33:13.791388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:20.750 [2024-11-10 05:33:13.791395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:20.750 [2024-11-10 05:33:13.791402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:20.750 [2024-11-10 05:33:13.791409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:20.750 [2024-11-10 05:33:13.791416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:20.750 [2024-11-10 05:33:13.791423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:20.750 [2024-11-10 05:33:13.791430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:20.750 [2024-11-10 05:33:13.791437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:20.750 [2024-11-10 05:33:13.791445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:20.750 [2024-11-10 05:33:13.791452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:20.750 [2024-11-10 05:33:13.791460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:20.750 [2024-11-10 05:33:13.791467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:20.750 [2024-11-10 05:33:13.791474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:20.750 [2024-11-10 05:33:13.791480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:20.750 [2024-11-10 05:33:13.791487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:20.750 [2024-11-10 05:33:13.791494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:20.750 [2024-11-10 05:33:13.791501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:20.750 [2024-11-10 05:33:13.791508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:20.750 [2024-11-10 05:33:13.791514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:20.750 [2024-11-10 05:33:13.791521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:20.750 [2024-11-10 05:33:13.791528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:20.750 [2024-11-10 05:33:13.791534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:20.750 [2024-11-10 05:33:13.791540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:20.750 [2024-11-10 05:33:13.791546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:20.750 [2024-11-10 05:33:13.791552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:20.750 [2024-11-10 05:33:13.791559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:20.750 [2024-11-10 05:33:13.791568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:20.750 [2024-11-10 05:33:13.791575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:20.750 [2024-11-10 05:33:13.791582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:20.750 [2024-11-10 05:33:13.791590] 
ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:20.750 [2024-11-10 05:33:13.791598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:20.750 [2024-11-10 05:33:13.791606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:20.750 [2024-11-10 05:33:13.791613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:20.750 [2024-11-10 05:33:13.791622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:20.750 [2024-11-10 05:33:13.791629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:20.750 [2024-11-10 05:33:13.791636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:20.750 [2024-11-10 05:33:13.791643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:20.750 [2024-11-10 05:33:13.791649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:20.750 [2024-11-10 05:33:13.791656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:20.750 [2024-11-10 05:33:13.791664] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:20.750 [2024-11-10 05:33:13.791679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:20.750 [2024-11-10 05:33:13.791687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:20.750 [2024-11-10 05:33:13.791697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:20.750 [2024-11-10 05:33:13.791704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:20.750 [2024-11-10 05:33:13.791711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:20.750 [2024-11-10 05:33:13.791718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:20.750 [2024-11-10 05:33:13.791725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:20.750 [2024-11-10 05:33:13.791732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:20.750 [2024-11-10 05:33:13.791739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:20.750 [2024-11-10 05:33:13.791747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:20.750 [2024-11-10 05:33:13.791753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:20.750 [2024-11-10 05:33:13.791760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:20.750 [2024-11-10 05:33:13.791767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:20.750 [2024-11-10 05:33:13.791774] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:20.750 [2024-11-10 05:33:13.791781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:20.750 [2024-11-10 05:33:13.791789] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:20.750 [2024-11-10 05:33:13.791796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:20.750 [2024-11-10 05:33:13.791808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:20.751 [2024-11-10 05:33:13.791818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:20.751 [2024-11-10 05:33:13.791825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:20.751 [2024-11-10 05:33:13.791834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:20.751 [2024-11-10 05:33:13.791842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.791850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:20.751 [2024-11-10 05:33:13.791857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:31:20.751 [2024-11-10 05:33:13.791868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.813171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.813266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:20.751 [2024-11-10 05:33:13.813307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.255 ms 00:31:20.751 [2024-11-10 05:33:13.813329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.813590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.813638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:20.751 [2024-11-10 05:33:13.813664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:31:20.751 [2024-11-10 05:33:13.813686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.826536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.826584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:20.751 [2024-11-10 05:33:13.826601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.697 ms 00:31:20.751 [2024-11-10 05:33:13.826608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.826644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.826658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:20.751 [2024-11-10 05:33:13.826668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:20.751 [2024-11-10 05:33:13.826676] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.826780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.826792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:20.751 [2024-11-10 05:33:13.826801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:31:20.751 [2024-11-10 05:33:13.826811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.826933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.826943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:20.751 [2024-11-10 05:33:13.826951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:31:20.751 [2024-11-10 05:33:13.826959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.833982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.834041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:20.751 [2024-11-10 05:33:13.834052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.002 ms 00:31:20.751 [2024-11-10 05:33:13.834070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.834185] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:20.751 [2024-11-10 05:33:13.834199] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:20.751 [2024-11-10 05:33:13.834210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.834219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:20.751 [2024-11-10 05:33:13.834234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:20.751 [2024-11-10 05:33:13.834242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.846589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.846638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:20.751 [2024-11-10 05:33:13.846650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.328 ms 00:31:20.751 [2024-11-10 05:33:13.846658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.846789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.846799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:20.751 [2024-11-10 05:33:13.846815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:31:20.751 [2024-11-10 05:33:13.846822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.846878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.846887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:20.751 [2024-11-10 05:33:13.846901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:20.751 [2024-11-10 05:33:13.846911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 
05:33:13.847253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.847266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:20.751 [2024-11-10 05:33:13.847275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:31:20.751 [2024-11-10 05:33:13.847282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.847299] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:20.751 [2024-11-10 05:33:13.847308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.847316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:20.751 [2024-11-10 05:33:13.847324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:20.751 [2024-11-10 05:33:13.847340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.856573] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:20.751 [2024-11-10 05:33:13.856882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.856901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:20.751 [2024-11-10 05:33:13.856911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.523 ms 00:31:20.751 [2024-11-10 05:33:13.856919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.859540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.859576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:20.751 [2024-11-10 05:33:13.859586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:31:20.751 [2024-11-10 05:33:13.859594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.859671] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:31:20.751 [2024-11-10 05:33:13.860436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.860540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:20.751 [2024-11-10 05:33:13.860593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:31:20.751 [2024-11-10 05:33:13.860616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.860666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.860689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:20.751 [2024-11-10 05:33:13.861207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:20.751 [2024-11-10 05:33:13.861234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.861310] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:20.751 [2024-11-10 05:33:13.861322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.861331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:20.751 [2024-11-10 05:33:13.861340] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:20.751 [2024-11-10 05:33:13.861347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.867633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.867816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:20.751 [2024-11-10 05:33:13.867835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.245 ms 00:31:20.751 [2024-11-10 05:33:13.867843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.867928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.751 [2024-11-10 05:33:13.867943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:20.751 [2024-11-10 05:33:13.867952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:31:20.751 [2024-11-10 05:33:13.867959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.751 [2024-11-10 05:33:13.869440] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 82.202 ms, result 0 00:31:22.135  [2024-11-10T05:33:16.315Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-10T05:33:17.257Z] Copying: 33/1024 [MB] (18 MBps) [2024-11-10T05:33:18.200Z] Copying: 47/1024 [MB] (13 MBps) [2024-11-10T05:33:19.143Z] Copying: 61/1024 [MB] (14 MBps) [2024-11-10T05:33:20.088Z] Copying: 88/1024 [MB] (26 MBps) [2024-11-10T05:33:21.472Z] Copying: 120/1024 [MB] (31 MBps) [2024-11-10T05:33:22.416Z] Copying: 139/1024 [MB] (19 MBps) [2024-11-10T05:33:23.357Z] Copying: 156/1024 [MB] (16 MBps) [2024-11-10T05:33:24.306Z] Copying: 168/1024 [MB] (11 MBps) [2024-11-10T05:33:25.249Z] Copying: 187/1024 [MB] (19 MBps) [2024-11-10T05:33:26.193Z] Copying: 208/1024 [MB] (20 MBps) [2024-11-10T05:33:27.137Z] Copying: 226/1024 [MB] (17 MBps) [2024-11-10T05:33:28.083Z] Copying: 240/1024 [MB] (13 MBps) [2024-11-10T05:33:29.470Z] Copying: 254/1024 [MB] (14 MBps) [2024-11-10T05:33:30.414Z] Copying: 278/1024 [MB] (23 MBps) [2024-11-10T05:33:31.356Z] Copying: 294/1024 [MB] (16 MBps) [2024-11-10T05:33:32.301Z] Copying: 310/1024 [MB] (15 MBps) [2024-11-10T05:33:33.243Z] Copying: 330/1024 [MB] (20 MBps) [2024-11-10T05:33:34.186Z] Copying: 350/1024 [MB] (19 MBps) [2024-11-10T05:33:35.129Z] Copying: 367/1024 [MB] (16 MBps) [2024-11-10T05:33:36.074Z] Copying: 379/1024 [MB] (12 MBps) [2024-11-10T05:33:37.463Z] Copying: 399/1024 [MB] (20 MBps) [2024-11-10T05:33:38.417Z] Copying: 412/1024 [MB] (12 MBps) [2024-11-10T05:33:39.361Z] Copying: 432/1024 [MB] (19 MBps) [2024-11-10T05:33:40.307Z] Copying: 453/1024 [MB] (21 MBps) [2024-11-10T05:33:41.250Z] Copying: 465/1024 [MB] (11 MBps) [2024-11-10T05:33:42.197Z] Copying: 480/1024 [MB] (14 MBps) [2024-11-10T05:33:43.141Z] Copying: 508/1024 [MB] (28 MBps) [2024-11-10T05:33:44.086Z] Copying: 524/1024 [MB] (16 MBps) [2024-11-10T05:33:45.474Z] Copying: 538/1024 [MB] (13 MBps) [2024-11-10T05:33:46.419Z] Copying: 549/1024 [MB] (11 MBps) [2024-11-10T05:33:47.363Z] Copying: 565/1024 [MB] (15 MBps) [2024-11-10T05:33:48.307Z] Copying: 575/1024 [MB] (10 MBps) [2024-11-10T05:33:49.252Z] Copying: 588/1024 [MB] (12 MBps) [2024-11-10T05:33:50.202Z] Copying: 610/1024 [MB] (22 MBps) [2024-11-10T05:33:51.143Z] Copying: 625/1024 [MB] (14 MBps) [2024-11-10T05:33:52.086Z] Copying: 645/1024 [MB] (19 MBps) [2024-11-10T05:33:53.473Z] Copying: 673/1024 [MB] (28 MBps) 
[2024-11-10T05:33:54.418Z] Copying: 691/1024 [MB] (18 MBps) [2024-11-10T05:33:55.362Z] Copying: 704/1024 [MB] (13 MBps) [2024-11-10T05:33:56.305Z] Copying: 717/1024 [MB] (12 MBps) [2024-11-10T05:33:57.252Z] Copying: 728/1024 [MB] (11 MBps) [2024-11-10T05:33:58.195Z] Copying: 742/1024 [MB] (14 MBps) [2024-11-10T05:33:59.139Z] Copying: 754/1024 [MB] (11 MBps) [2024-11-10T05:34:00.081Z] Copying: 768/1024 [MB] (13 MBps) [2024-11-10T05:34:01.467Z] Copying: 781/1024 [MB] (12 MBps) [2024-11-10T05:34:02.411Z] Copying: 797/1024 [MB] (16 MBps) [2024-11-10T05:34:03.352Z] Copying: 818/1024 [MB] (21 MBps) [2024-11-10T05:34:04.296Z] Copying: 831/1024 [MB] (12 MBps) [2024-11-10T05:34:05.241Z] Copying: 842/1024 [MB] (11 MBps) [2024-11-10T05:34:06.186Z] Copying: 853/1024 [MB] (11 MBps) [2024-11-10T05:34:07.130Z] Copying: 867/1024 [MB] (13 MBps) [2024-11-10T05:34:08.074Z] Copying: 880/1024 [MB] (12 MBps) [2024-11-10T05:34:09.463Z] Copying: 891/1024 [MB] (10 MBps) [2024-11-10T05:34:10.407Z] Copying: 901/1024 [MB] (10 MBps) [2024-11-10T05:34:11.389Z] Copying: 912/1024 [MB] (10 MBps) [2024-11-10T05:34:12.332Z] Copying: 930/1024 [MB] (17 MBps) [2024-11-10T05:34:13.276Z] Copying: 941/1024 [MB] (10 MBps) [2024-11-10T05:34:14.221Z] Copying: 951/1024 [MB] (10 MBps) [2024-11-10T05:34:15.164Z] Copying: 970/1024 [MB] (18 MBps) [2024-11-10T05:34:16.109Z] Copying: 983/1024 [MB] (12 MBps) [2024-11-10T05:34:17.495Z] Copying: 1001/1024 [MB] (18 MBps) [2024-11-10T05:34:17.756Z] Copying: 1016/1024 [MB] (15 MBps) [2024-11-10T05:34:18.018Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-10 05:34:17.875638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.782 [2024-11-10 05:34:17.875754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:24.782 [2024-11-10 05:34:17.875779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:24.782 [2024-11-10 05:34:17.875794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.782 [2024-11-10 05:34:17.875832] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:24.782 [2024-11-10 05:34:17.876756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.782 [2024-11-10 05:34:17.876801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:24.783 [2024-11-10 05:34:17.876835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:32:24.783 [2024-11-10 05:34:17.876850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.783 [2024-11-10 05:34:17.877270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.783 [2024-11-10 05:34:17.877296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:24.783 [2024-11-10 05:34:17.877312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:32:24.783 [2024-11-10 05:34:17.877325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.783 [2024-11-10 05:34:17.877374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.783 [2024-11-10 05:34:17.877390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:24.783 [2024-11-10 05:34:17.877405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:24.783 [2024-11-10 05:34:17.877418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.783 [2024-11-10 
05:34:17.877514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.783 [2024-11-10 05:34:17.877530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:24.783 [2024-11-10 05:34:17.877549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:32:24.783 [2024-11-10 05:34:17.877562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.783 [2024-11-10 05:34:17.877586] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:24.783 [2024-11-10 05:34:17.877607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:32:24.783 [2024-11-10 05:34:17.877624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:32:24.783 [2024-11-10 05:34:17.877901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.877985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:24.783 [2024-11-10 05:34:17.878705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878969] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.878982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.879013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.879027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.879042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:24.784 [2024-11-10 05:34:17.879071] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:24.784 [2024-11-10 05:34:17.879090] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5d7d5206-51f2-499a-8e45-dc97d339ed69 00:32:24.784 [2024-11-10 05:34:17.879104] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:32:24.784 [2024-11-10 05:34:17.879119] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2336 00:32:24.784 [2024-11-10 05:34:17.879132] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2304 00:32:24.784 [2024-11-10 05:34:17.879146] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0139 00:32:24.784 [2024-11-10 05:34:17.879158] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:24.784 [2024-11-10 05:34:17.879177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:24.784 [2024-11-10 05:34:17.879191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:24.784 [2024-11-10 05:34:17.879203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:24.784 [2024-11-10 05:34:17.879214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:24.784 [2024-11-10 05:34:17.879228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.784 [2024-11-10 05:34:17.879240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:24.784 [2024-11-10 05:34:17.879255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.643 ms 00:32:24.784 [2024-11-10 05:34:17.879274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.784 [2024-11-10 05:34:17.882823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.784 [2024-11-10 05:34:17.882869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:24.784 [2024-11-10 05:34:17.882886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.524 ms 00:32:24.784 [2024-11-10 05:34:17.882909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.784 [2024-11-10 05:34:17.883089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.784 [2024-11-10 05:34:17.883105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:24.784 [2024-11-10 05:34:17.883121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:32:24.784 [2024-11-10 05:34:17.883134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.784 [2024-11-10 05:34:17.890138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.784 [2024-11-10 05:34:17.890303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:24.784 [2024-11-10 05:34:17.890370] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.784 [2024-11-10 05:34:17.890394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.784 [2024-11-10 05:34:17.890480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.784 [2024-11-10 05:34:17.890501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:24.784 [2024-11-10 05:34:17.890521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.784 [2024-11-10 05:34:17.890540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.784 [2024-11-10 05:34:17.890679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.784 [2024-11-10 05:34:17.890708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:24.784 [2024-11-10 05:34:17.890729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.784 [2024-11-10 05:34:17.890754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.784 [2024-11-10 05:34:17.890783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.784 [2024-11-10 05:34:17.890863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:24.784 [2024-11-10 05:34:17.890887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.784 [2024-11-10 05:34:17.890906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.784 [2024-11-10 05:34:17.903935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.784 [2024-11-10 05:34:17.904141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:24.784 [2024-11-10 05:34:17.904207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.784 [2024-11-10 05:34:17.904231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.784 [2024-11-10 05:34:17.915215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.784 [2024-11-10 05:34:17.915381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:24.784 [2024-11-10 05:34:17.915437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.784 [2024-11-10 05:34:17.915460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.784 [2024-11-10 05:34:17.915533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.784 [2024-11-10 05:34:17.915556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:24.784 [2024-11-10 05:34:17.915576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.784 [2024-11-10 05:34:17.915596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.784 [2024-11-10 05:34:17.915648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.784 [2024-11-10 05:34:17.915670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:24.784 [2024-11-10 05:34:17.915691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.784 [2024-11-10 05:34:17.915744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.784 [2024-11-10 05:34:17.915827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.784 [2024-11-10 05:34:17.915852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:32:24.784 [2024-11-10 05:34:17.915939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.784 [2024-11-10 05:34:17.916200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.784 [2024-11-10 05:34:17.916241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.784 [2024-11-10 05:34:17.916259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:24.784 [2024-11-10 05:34:17.916269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.784 [2024-11-10 05:34:17.916278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.784 [2024-11-10 05:34:17.916318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.784 [2024-11-10 05:34:17.916327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:24.784 [2024-11-10 05:34:17.916340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.784 [2024-11-10 05:34:17.916348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.784 [2024-11-10 05:34:17.916398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:24.784 [2024-11-10 05:34:17.916408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:24.784 [2024-11-10 05:34:17.916417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:24.784 [2024-11-10 05:34:17.916425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.784 [2024-11-10 05:34:17.916559] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 40.897 ms, result 0 00:32:25.046 00:32:25.046 00:32:25.046 05:34:18 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:27.590 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 92829 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 92829 ']' 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 92829 00:32:27.590 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (92829) - No such process 00:32:27.590 Process with pid 92829 is not found 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 92829 is not found' 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:32:27.590 Remove shared memory files 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f 
/dev/hugepages/ftl_5d7d5206-51f2-499a-8e45-dc97d339ed69_band_md /dev/hugepages/ftl_5d7d5206-51f2-499a-8e45-dc97d339ed69_l2p_l1 /dev/hugepages/ftl_5d7d5206-51f2-499a-8e45-dc97d339ed69_l2p_l2 /dev/hugepages/ftl_5d7d5206-51f2-499a-8e45-dc97d339ed69_l2p_l2_ctx /dev/hugepages/ftl_5d7d5206-51f2-499a-8e45-dc97d339ed69_nvc_md /dev/hugepages/ftl_5d7d5206-51f2-499a-8e45-dc97d339ed69_p2l_pool /dev/hugepages/ftl_5d7d5206-51f2-499a-8e45-dc97d339ed69_sb /dev/hugepages/ftl_5d7d5206-51f2-499a-8e45-dc97d339ed69_sb_shm /dev/hugepages/ftl_5d7d5206-51f2-499a-8e45-dc97d339ed69_trim_bitmap /dev/hugepages/ftl_5d7d5206-51f2-499a-8e45-dc97d339ed69_trim_log /dev/hugepages/ftl_5d7d5206-51f2-499a-8e45-dc97d339ed69_trim_md /dev/hugepages/ftl_5d7d5206-51f2-499a-8e45-dc97d339ed69_vmap 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:32:27.590 ************************************ 00:32:27.590 END TEST ftl_restore_fast 00:32:27.590 ************************************ 00:32:27.590 00:32:27.590 real 4m27.956s 00:32:27.590 user 4m14.580s 00:32:27.590 sys 0m12.985s 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:27.590 05:34:20 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:32:27.590 Process with pid 84112 is not found 00:32:27.590 05:34:20 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:27.590 05:34:20 ftl -- ftl/ftl.sh@14 -- # killprocess 84112 00:32:27.591 05:34:20 ftl -- common/autotest_common.sh@950 -- # '[' -z 84112 ']' 00:32:27.591 05:34:20 ftl -- common/autotest_common.sh@954 -- # kill -0 84112 00:32:27.591 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (84112) - No such process 00:32:27.591 05:34:20 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 84112 is not found' 00:32:27.591 05:34:20 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:27.591 05:34:20 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=95554 00:32:27.591 05:34:20 ftl -- ftl/ftl.sh@20 -- # waitforlisten 95554 00:32:27.591 05:34:20 ftl -- common/autotest_common.sh@831 -- # '[' -z 95554 ']' 00:32:27.591 05:34:20 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:27.591 05:34:20 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:27.591 05:34:20 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:27.591 05:34:20 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.591 05:34:20 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:27.591 05:34:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:27.591 [2024-11-10 05:34:20.660688] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:27.591 [2024-11-10 05:34:20.660835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95554 ] 00:32:27.591 [2024-11-10 05:34:20.813175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.852 [2024-11-10 05:34:20.863363] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.424 05:34:21 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:28.424 05:34:21 ftl -- common/autotest_common.sh@864 -- # return 0 00:32:28.424 05:34:21 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:28.685 nvme0n1 00:32:28.685 05:34:21 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:28.685 05:34:21 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:28.685 05:34:21 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:28.946 05:34:22 ftl -- ftl/common.sh@28 -- # stores=0b627628-32c8-4dc7-9726-c2fd0d2c3421 00:32:28.946 05:34:22 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:28.946 05:34:22 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0b627628-32c8-4dc7-9726-c2fd0d2c3421 00:32:29.208 05:34:22 ftl -- ftl/ftl.sh@23 -- # killprocess 95554 00:32:29.208 05:34:22 ftl -- common/autotest_common.sh@950 -- # '[' -z 95554 ']' 00:32:29.208 05:34:22 ftl -- common/autotest_common.sh@954 -- # kill -0 95554 00:32:29.208 05:34:22 ftl -- common/autotest_common.sh@955 -- # uname 00:32:29.208 05:34:22 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:29.208 05:34:22 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95554 00:32:29.208 killing process with pid 95554 00:32:29.208 05:34:22 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:29.208 05:34:22 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:29.208 05:34:22 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95554' 00:32:29.208 05:34:22 ftl -- common/autotest_common.sh@969 -- # kill 95554 00:32:29.208 05:34:22 ftl -- common/autotest_common.sh@974 -- # wait 95554 00:32:29.468 05:34:22 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:29.730 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:29.730 Waiting for block devices as requested 00:32:29.730 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:29.990 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:29.990 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:29.990 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:35.303 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:35.303 05:34:28 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:35.303 Remove shared memory files 00:32:35.303 05:34:28 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:35.303 05:34:28 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:35.303 05:34:28 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:35.303 05:34:28 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:35.303 05:34:28 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:35.303 05:34:28 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:35.303 
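The remove_shm cleanup traced above deletes the per-device FTL state kept in hugepage-backed files (band_md, l2p_*, nvc_md, p2l_pool, sb, sb_shm, trim_*, vmap, all keyed by the device UUID) plus any leftover /dev/shm/iscsi file. A minimal sketch of that helper follows, assuming the same /dev/hugepages naming seen in this run; the body is an approximation of what the trace shows, not the verbatim ftl/common.sh source:

    # Approximation of the remove_shm helper traced above (ftl/common.sh@204-209);
    # the glob is an assumption standing in for the explicit per-UUID file list.
    remove_shm() {
        echo Remove shared memory files
        rm -f /dev/hugepages/ftl_*   # band_md, l2p_*, nvc_md, p2l_pool, sb*, trim_*, vmap
        rm -f /dev/shm/iscsi         # stale iSCSI shared memory, if any
    }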
************************************ 00:32:35.303 END TEST ftl 00:32:35.303 ************************************ 00:32:35.303 00:32:35.303 real 17m15.018s 00:32:35.303 user 19m19.236s 00:32:35.303 sys 1m26.930s 00:32:35.303 05:34:28 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:35.303 05:34:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:35.303 05:34:28 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:35.303 05:34:28 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:35.303 05:34:28 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:35.303 05:34:28 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:35.303 05:34:28 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:32:35.303 05:34:28 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:35.303 05:34:28 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:35.303 05:34:28 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:32:35.303 05:34:28 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:32:35.303 05:34:28 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:32:35.303 05:34:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:35.303 05:34:28 -- common/autotest_common.sh@10 -- # set +x 00:32:35.303 05:34:28 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:32:35.303 05:34:28 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:35.303 05:34:28 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:35.303 05:34:28 -- common/autotest_common.sh@10 -- # set +x 00:32:36.687 INFO: APP EXITING 00:32:36.687 INFO: killing all VMs 00:32:36.687 INFO: killing vhost app 00:32:36.687 INFO: EXIT DONE 00:32:36.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:37.209 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:37.209 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:37.209 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:37.209 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:37.780 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:38.042 Cleaning 00:32:38.042 Removing: /var/run/dpdk/spdk0/config 00:32:38.042 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:38.042 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:38.042 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:38.042 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:38.042 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:38.042 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:38.042 Removing: /var/run/dpdk/spdk0 00:32:38.042 Removing: /var/run/dpdk/spdk_pid69641 00:32:38.042 Removing: /var/run/dpdk/spdk_pid69799 00:32:38.042 Removing: /var/run/dpdk/spdk_pid69995 00:32:38.042 Removing: /var/run/dpdk/spdk_pid70077 00:32:38.042 Removing: /var/run/dpdk/spdk_pid70106 00:32:38.042 Removing: /var/run/dpdk/spdk_pid70217 00:32:38.042 Removing: /var/run/dpdk/spdk_pid70235 00:32:38.042 Removing: /var/run/dpdk/spdk_pid70412 00:32:38.042 Removing: /var/run/dpdk/spdk_pid70486 00:32:38.042 Removing: /var/run/dpdk/spdk_pid70565 00:32:38.042 Removing: /var/run/dpdk/spdk_pid70660 00:32:38.042 Removing: /var/run/dpdk/spdk_pid70740 00:32:38.042 Removing: /var/run/dpdk/spdk_pid70780 00:32:38.042 Removing: /var/run/dpdk/spdk_pid70811 00:32:38.042 Removing: /var/run/dpdk/spdk_pid70881 00:32:38.042 Removing: /var/run/dpdk/spdk_pid70982 00:32:38.042 Removing: /var/run/dpdk/spdk_pid71396 00:32:38.042 Removing: /var/run/dpdk/spdk_pid71449 
00:32:38.042 Removing: /var/run/dpdk/spdk_pid71490 00:32:38.042 Removing: /var/run/dpdk/spdk_pid71506 00:32:38.042 Removing: /var/run/dpdk/spdk_pid71564 00:32:38.042 Removing: /var/run/dpdk/spdk_pid71580 00:32:38.042 Removing: /var/run/dpdk/spdk_pid71638 00:32:38.042 Removing: /var/run/dpdk/spdk_pid71654 00:32:38.042 Removing: /var/run/dpdk/spdk_pid71696 00:32:38.042 Removing: /var/run/dpdk/spdk_pid71714 00:32:38.042 Removing: /var/run/dpdk/spdk_pid71756 00:32:38.042 Removing: /var/run/dpdk/spdk_pid71774 00:32:38.042 Removing: /var/run/dpdk/spdk_pid71901 00:32:38.042 Removing: /var/run/dpdk/spdk_pid71932 00:32:38.042 Removing: /var/run/dpdk/spdk_pid72016 00:32:38.042 Removing: /var/run/dpdk/spdk_pid72177 00:32:38.042 Removing: /var/run/dpdk/spdk_pid72244 00:32:38.042 Removing: /var/run/dpdk/spdk_pid72275 00:32:38.042 Removing: /var/run/dpdk/spdk_pid72684 00:32:38.042 Removing: /var/run/dpdk/spdk_pid72777 00:32:38.042 Removing: /var/run/dpdk/spdk_pid72875 00:32:38.042 Removing: /var/run/dpdk/spdk_pid72917 00:32:38.042 Removing: /var/run/dpdk/spdk_pid72937 00:32:38.042 Removing: /var/run/dpdk/spdk_pid73010 00:32:38.042 Removing: /var/run/dpdk/spdk_pid73662 00:32:38.042 Removing: /var/run/dpdk/spdk_pid73692 00:32:38.042 Removing: /var/run/dpdk/spdk_pid74136 00:32:38.042 Removing: /var/run/dpdk/spdk_pid74228 00:32:38.042 Removing: /var/run/dpdk/spdk_pid74332 00:32:38.304 Removing: /var/run/dpdk/spdk_pid74369 00:32:38.304 Removing: /var/run/dpdk/spdk_pid74394 00:32:38.304 Removing: /var/run/dpdk/spdk_pid74414 00:32:38.304 Removing: /var/run/dpdk/spdk_pid76228 00:32:38.304 Removing: /var/run/dpdk/spdk_pid76343 00:32:38.304 Removing: /var/run/dpdk/spdk_pid76354 00:32:38.304 Removing: /var/run/dpdk/spdk_pid76370 00:32:38.304 Removing: /var/run/dpdk/spdk_pid76409 00:32:38.304 Removing: /var/run/dpdk/spdk_pid76413 00:32:38.304 Removing: /var/run/dpdk/spdk_pid76425 00:32:38.304 Removing: /var/run/dpdk/spdk_pid76464 00:32:38.304 Removing: /var/run/dpdk/spdk_pid76468 00:32:38.304 Removing: /var/run/dpdk/spdk_pid76480 00:32:38.304 Removing: /var/run/dpdk/spdk_pid76525 00:32:38.304 Removing: /var/run/dpdk/spdk_pid76529 00:32:38.304 Removing: /var/run/dpdk/spdk_pid76541 00:32:38.304 Removing: /var/run/dpdk/spdk_pid77912 00:32:38.304 Removing: /var/run/dpdk/spdk_pid77998 00:32:38.304 Removing: /var/run/dpdk/spdk_pid79395 00:32:38.304 Removing: /var/run/dpdk/spdk_pid80750 00:32:38.304 Removing: /var/run/dpdk/spdk_pid80810 00:32:38.304 Removing: /var/run/dpdk/spdk_pid80864 00:32:38.304 Removing: /var/run/dpdk/spdk_pid80926 00:32:38.304 Removing: /var/run/dpdk/spdk_pid80998 00:32:38.304 Removing: /var/run/dpdk/spdk_pid81071 00:32:38.304 Removing: /var/run/dpdk/spdk_pid81203 00:32:38.304 Removing: /var/run/dpdk/spdk_pid81550 00:32:38.304 Removing: /var/run/dpdk/spdk_pid81576 00:32:38.304 Removing: /var/run/dpdk/spdk_pid82011 00:32:38.304 Removing: /var/run/dpdk/spdk_pid82184 00:32:38.304 Removing: /var/run/dpdk/spdk_pid82277 00:32:38.304 Removing: /var/run/dpdk/spdk_pid82378 00:32:38.304 Removing: /var/run/dpdk/spdk_pid82413 00:32:38.304 Removing: /var/run/dpdk/spdk_pid82440 00:32:38.304 Removing: /var/run/dpdk/spdk_pid82723 00:32:38.304 Removing: /var/run/dpdk/spdk_pid82762 00:32:38.304 Removing: /var/run/dpdk/spdk_pid82813 00:32:38.304 Removing: /var/run/dpdk/spdk_pid83180 00:32:38.304 Removing: /var/run/dpdk/spdk_pid83323 00:32:38.304 Removing: /var/run/dpdk/spdk_pid84112 00:32:38.304 Removing: /var/run/dpdk/spdk_pid84233 00:32:38.304 Removing: /var/run/dpdk/spdk_pid84388 00:32:38.304 Removing: 
/var/run/dpdk/spdk_pid84463 00:32:38.304 Removing: /var/run/dpdk/spdk_pid84738 00:32:38.304 Removing: /var/run/dpdk/spdk_pid84975 00:32:38.304 Removing: /var/run/dpdk/spdk_pid85314 00:32:38.304 Removing: /var/run/dpdk/spdk_pid85480 00:32:38.304 Removing: /var/run/dpdk/spdk_pid85665 00:32:38.304 Removing: /var/run/dpdk/spdk_pid85707 00:32:38.304 Removing: /var/run/dpdk/spdk_pid85855 00:32:38.304 Removing: /var/run/dpdk/spdk_pid85868 00:32:38.304 Removing: /var/run/dpdk/spdk_pid85904 00:32:38.304 Removing: /var/run/dpdk/spdk_pid86173 00:32:38.304 Removing: /var/run/dpdk/spdk_pid86384 00:32:38.304 Removing: /var/run/dpdk/spdk_pid86928 00:32:38.304 Removing: /var/run/dpdk/spdk_pid87584 00:32:38.304 Removing: /var/run/dpdk/spdk_pid88263 00:32:38.304 Removing: /var/run/dpdk/spdk_pid89066 00:32:38.304 Removing: /var/run/dpdk/spdk_pid89203 00:32:38.304 Removing: /var/run/dpdk/spdk_pid89286 00:32:38.304 Removing: /var/run/dpdk/spdk_pid89939 00:32:38.304 Removing: /var/run/dpdk/spdk_pid89994 00:32:38.304 Removing: /var/run/dpdk/spdk_pid90700 00:32:38.304 Removing: /var/run/dpdk/spdk_pid91131 00:32:38.304 Removing: /var/run/dpdk/spdk_pid91908 00:32:38.304 Removing: /var/run/dpdk/spdk_pid92030 00:32:38.304 Removing: /var/run/dpdk/spdk_pid92061 00:32:38.304 Removing: /var/run/dpdk/spdk_pid92119 00:32:38.304 Removing: /var/run/dpdk/spdk_pid92169 00:32:38.304 Removing: /var/run/dpdk/spdk_pid92229 00:32:38.304 Removing: /var/run/dpdk/spdk_pid92405 00:32:38.304 Removing: /var/run/dpdk/spdk_pid92475 00:32:38.304 Removing: /var/run/dpdk/spdk_pid92536 00:32:38.304 Removing: /var/run/dpdk/spdk_pid92596 00:32:38.304 Removing: /var/run/dpdk/spdk_pid92626 00:32:38.304 Removing: /var/run/dpdk/spdk_pid92693 00:32:38.304 Removing: /var/run/dpdk/spdk_pid92829 00:32:38.304 Removing: /var/run/dpdk/spdk_pid93037 00:32:38.304 Removing: /var/run/dpdk/spdk_pid93575 00:32:38.304 Removing: /var/run/dpdk/spdk_pid94284 00:32:38.304 Removing: /var/run/dpdk/spdk_pid94863 00:32:38.304 Removing: /var/run/dpdk/spdk_pid95554 00:32:38.304 Clean 00:32:38.566 05:34:31 -- common/autotest_common.sh@1451 -- # return 0 00:32:38.566 05:34:31 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:32:38.566 05:34:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:38.566 05:34:31 -- common/autotest_common.sh@10 -- # set +x 00:32:38.566 05:34:31 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:32:38.566 05:34:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:38.566 05:34:31 -- common/autotest_common.sh@10 -- # set +x 00:32:38.566 05:34:31 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:38.566 05:34:31 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:38.566 05:34:31 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:38.566 05:34:31 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:32:38.566 05:34:31 -- spdk/autotest.sh@394 -- # hostname 00:32:38.566 05:34:31 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:38.828 geninfo: WARNING: invalid characters removed from testname! 
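The coverage post-processing that follows merges the base and test captures and then strips out-of-tree sources. Condensed, with the repeated --rc branch/function-coverage flags folded into $LCOV_OPTS (exported later in this log), the traced pipeline amounts to:

    # Condensed form of the lcov steps traced below (autotest.sh@395-403);
    # $LCOV_OPTS stands in for the --rc coverage flags repeated in the raw trace.
    out=/home/vagrant/spdk_repo/spdk/../output
    lcov $LCOV_OPTS -q -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info
    lcov $LCOV_OPTS -q -r $out/cov_total.info '*/dpdk/*' -o $out/cov_total.info
    lcov $LCOV_OPTS -q -r $out/cov_total.info --ignore-errors unused,unused '/usr/*' -o $out/cov_total.info
    lcov $LCOV_OPTS -q -r $out/cov_total.info '*/examples/vmd/*' -o $out/cov_total.info
    lcov $LCOV_OPTS -q -r $out/cov_total.info '*/app/spdk_lspci/*' -o $out/cov_total.info
    lcov $LCOV_OPTS -q -r $out/cov_total.info '*/app/spdk_top/*' -o $out/cov_total.info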
00:33:05.411 05:34:56 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:07.325 05:35:00 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:09.873 05:35:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:12.432 05:35:05 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:14.347 05:35:07 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:16.260 05:35:09 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:18.176 05:35:10 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:18.176 05:35:10 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:33:18.176 05:35:10 -- common/autotest_common.sh@1681 -- $ lcov --version 00:33:18.176 05:35:10 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:33:18.176 05:35:11 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:33:18.176 05:35:11 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:33:18.176 05:35:11 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:33:18.176 05:35:11 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:33:18.176 05:35:11 -- scripts/common.sh@336 -- $ IFS=.-: 00:33:18.176 05:35:11 -- scripts/common.sh@336 -- $ read -ra ver1 00:33:18.176 05:35:11 -- scripts/common.sh@337 -- $ IFS=.-: 00:33:18.176 05:35:11 -- scripts/common.sh@337 -- $ read -ra ver2 00:33:18.176 05:35:11 -- scripts/common.sh@338 -- $ local 'op=<' 00:33:18.176 05:35:11 -- scripts/common.sh@340 -- $ ver1_l=2 00:33:18.176 05:35:11 -- scripts/common.sh@341 -- $ ver2_l=1 00:33:18.176 05:35:11 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
00:33:18.176 05:35:10 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:33:18.176 05:35:10 -- common/autotest_common.sh@1681 -- $ lcov --version
00:33:18.176 05:35:10 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:33:18.176 05:35:11 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:33:18.176 05:35:11 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:33:18.176 05:35:11 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:33:18.176 05:35:11 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:33:18.176 05:35:11 -- scripts/common.sh@336 -- $ IFS=.-:
00:33:18.176 05:35:11 -- scripts/common.sh@336 -- $ read -ra ver1
00:33:18.176 05:35:11 -- scripts/common.sh@337 -- $ IFS=.-:
00:33:18.176 05:35:11 -- scripts/common.sh@337 -- $ read -ra ver2
00:33:18.176 05:35:11 -- scripts/common.sh@338 -- $ local 'op=<'
00:33:18.176 05:35:11 -- scripts/common.sh@340 -- $ ver1_l=2
00:33:18.176 05:35:11 -- scripts/common.sh@341 -- $ ver2_l=1
00:33:18.176 05:35:11 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:33:18.176 05:35:11 -- scripts/common.sh@344 -- $ case "$op" in
00:33:18.176 05:35:11 -- scripts/common.sh@345 -- $ : 1
00:33:18.176 05:35:11 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:33:18.176 05:35:11 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:18.176 05:35:11 -- scripts/common.sh@365 -- $ decimal 1
00:33:18.176 05:35:11 -- scripts/common.sh@353 -- $ local d=1
00:33:18.176 05:35:11 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:33:18.176 05:35:11 -- scripts/common.sh@355 -- $ echo 1
00:33:18.176 05:35:11 -- scripts/common.sh@365 -- $ ver1[v]=1
00:33:18.176 05:35:11 -- scripts/common.sh@366 -- $ decimal 2
00:33:18.176 05:35:11 -- scripts/common.sh@353 -- $ local d=2
00:33:18.176 05:35:11 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:33:18.176 05:35:11 -- scripts/common.sh@355 -- $ echo 2
00:33:18.176 05:35:11 -- scripts/common.sh@366 -- $ ver2[v]=2
00:33:18.176 05:35:11 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:33:18.176 05:35:11 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:33:18.176 05:35:11 -- scripts/common.sh@368 -- $ return 0
00:33:18.176 05:35:11 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:18.176 05:35:11 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:33:18.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:18.176 --rc genhtml_branch_coverage=1
00:33:18.176 --rc genhtml_function_coverage=1
00:33:18.176 --rc genhtml_legend=1
00:33:18.176 --rc geninfo_all_blocks=1
00:33:18.176 --rc geninfo_unexecuted_blocks=1
00:33:18.176 
00:33:18.176 '
00:33:18.176 05:35:11 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:33:18.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:18.176 --rc genhtml_branch_coverage=1
00:33:18.176 --rc genhtml_function_coverage=1
00:33:18.176 --rc genhtml_legend=1
00:33:18.176 --rc geninfo_all_blocks=1
00:33:18.176 --rc geninfo_unexecuted_blocks=1
00:33:18.176 
00:33:18.176 '
00:33:18.176 05:35:11 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:33:18.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:18.176 --rc genhtml_branch_coverage=1
00:33:18.176 --rc genhtml_function_coverage=1
00:33:18.176 --rc genhtml_legend=1
00:33:18.176 --rc geninfo_all_blocks=1
00:33:18.176 --rc geninfo_unexecuted_blocks=1
00:33:18.176 
00:33:18.176 '
00:33:18.176 05:35:11 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:33:18.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:18.176 --rc genhtml_branch_coverage=1
00:33:18.176 --rc genhtml_function_coverage=1
00:33:18.176 --rc genhtml_legend=1
00:33:18.176 --rc geninfo_all_blocks=1
00:33:18.176 --rc geninfo_unexecuted_blocks=1
00:33:18.176 
00:33:18.176 '
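
Editor's note: the lt 1.15 2 / cmp_versions trace above is how autotest picks between the lcov 1.x and 2.x option syntax. The installed lcov version string is split on '.', '-', and ':' and compared against 2 component by component; since 1.15 < 2, the 1.x --rc option set is exported into LCOV_OPTS and LCOV. A simplified reconstruction of that check, assuming only the '<' case (the real scripts/common.sh cmp_versions supports more operators):

    # Return success when version $1 is strictly older than version $2.
    lt() {
        local -a ver1 ver2
        local v max
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # Missing components compare as 0, so 1.15 vs 2 works.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # versions are equal, so not strictly less-than
    }
    lt 1.15 2 && echo 'lcov is pre-2.0: use the 1.x --rc option names'
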
00:33:18.176 05:35:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:33:18.176 05:35:11 -- scripts/common.sh@15 -- $ shopt -s extglob
00:33:18.176 05:35:11 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:18.176 05:35:11 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:18.176 05:35:11 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:18.176 05:35:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:18.176 05:35:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:18.176 05:35:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:18.176 05:35:11 -- paths/export.sh@5 -- $ export PATH
00:33:18.176 05:35:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
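
Editor's note: each PATH printed above is one prepend longer than the last because paths/export.sh unconditionally prefixes the golangci-lint, Go, and protoc directories every time it is sourced, so /opt/golangci/1.54.2/bin, /opt/go/1.21.1/bin, and /opt/protoc/21.7/bin all end up listed twice. Lookup still works (the first match wins), but a duplicate-safe prepend keeps the variable readable. A sketch, with path_prepend as an illustrative helper name rather than anything in the repo:

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                 # already on PATH: do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/go/1.21.1/bin
    export PATH
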
00:33:18.176 05:35:11 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:33:18.176 05:35:11 -- common/autobuild_common.sh@479 -- $ date +%s
00:33:18.176 05:35:11 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731216911.XXXXXX
00:33:18.176 05:35:11 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731216911.ySJgPH
00:33:18.176 05:35:11 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:33:18.176 05:35:11 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:33:18.176 05:35:11 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:33:18.176 05:35:11 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:33:18.176 05:35:11 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:33:18.176 05:35:11 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:33:18.176 05:35:11 -- common/autobuild_common.sh@495 -- $ get_config_params
00:33:18.176 05:35:11 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:33:18.176 05:35:11 -- common/autotest_common.sh@10 -- $ set +x
00:33:18.177 05:35:11 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:33:18.177 05:35:11 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:33:18.177 05:35:11 -- pm/common@17 -- $ local monitor
00:33:18.177 05:35:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:18.177 05:35:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:18.177 05:35:11 -- pm/common@25 -- $ sleep 1
00:33:18.177 05:35:11 -- pm/common@21 -- $ date +%s
00:33:18.177 05:35:11 -- pm/common@21 -- $ date +%s
00:33:18.177 05:35:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731216911
00:33:18.177 05:35:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731216911
00:33:18.177 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731216911_collect-cpu-load.pm.log
00:33:18.177 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731216911_collect-vmstat.pm.log
00:33:19.120 05:35:12 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:33:19.120 05:35:12 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:33:19.120 05:35:12 -- spdk/autopackage.sh@14 -- $ timing_finish
00:33:19.120 05:35:12 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:19.120 05:35:12 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:33:19.120 05:35:12 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:19.120 05:35:12 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:19.120 05:35:12 -- pm/common@29 -- $ signal_monitor_resources TERM
00:33:19.120 05:35:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:33:19.120 05:35:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:19.120 05:35:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:33:19.120 05:35:12 -- pm/common@44 -- $ pid=97253
00:33:19.120 05:35:12 -- pm/common@50 -- $ kill -TERM 97253
00:33:19.120 05:35:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:19.120 05:35:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:33:19.120 05:35:12 -- pm/common@44 -- $ pid=97255
00:33:19.120 05:35:12 -- pm/common@50 -- $ kill -TERM 97255
00:33:19.120 + [[ -n 5763 ]]
00:33:19.120 + sudo kill 5763
00:33:19.131 [Pipeline] }
00:33:19.147 [Pipeline] // timeout
00:33:19.153 [Pipeline] }
00:33:19.168 [Pipeline] // stage
00:33:19.173 [Pipeline] }
00:33:19.188 [Pipeline] // catchError
00:33:19.197 [Pipeline] stage
00:33:19.199 [Pipeline] { (Stop VM)
00:33:19.212 [Pipeline] sh
00:33:19.498 + vagrant halt
00:33:22.042 ==> default: Halting domain...
00:33:28.645 [Pipeline] sh
00:33:28.957 + vagrant destroy -f
00:33:31.505 ==> default: Removing domain...
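
Editor's note: the pm/common trace above shows the resource-monitor lifecycle. start_monitor_resources launches collect-cpu-load and collect-vmstat with their logs redirected under output/power, and the stop_monitor_resources EXIT trap later looks up each monitor's pidfile and sends SIGTERM (pids 97253 and 97255 in this run). A sketch of that pidfile-based shutdown, simplified from the traced pm/common loop:

    # Stop each monitor that left a pidfile behind; a monitor that never
    # started has no pidfile and is simply skipped.
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power
    for monitor in collect-cpu-load collect-vmstat; do
        pidfile="$power_dir/$monitor.pid"
        [[ -e $pidfile ]] || continue
        pid=$(<"$pidfile")
        kill -TERM "$pid" 2>/dev/null || true   # best-effort TERM
    done
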
00:33:32.461 [Pipeline] sh
00:33:32.744 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:33:32.755 [Pipeline] }
00:33:32.769 [Pipeline] // stage
00:33:32.773 [Pipeline] }
00:33:32.787 [Pipeline] // dir
00:33:32.792 [Pipeline] }
00:33:32.807 [Pipeline] // wrap
00:33:32.813 [Pipeline] }
00:33:32.826 [Pipeline] // catchError
00:33:32.836 [Pipeline] stage
00:33:32.838 [Pipeline] { (Epilogue)
00:33:32.852 [Pipeline] sh
00:33:33.155 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:38.454 [Pipeline] catchError
00:33:38.456 [Pipeline] {
00:33:38.470 [Pipeline] sh
00:33:38.760 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:38.761 Artifacts sizes are good
00:33:38.771 [Pipeline] }
00:33:38.786 [Pipeline] // catchError
00:33:38.796 [Pipeline] archiveArtifacts
00:33:38.804 Archiving artifacts
00:33:38.926 [Pipeline] cleanWs
00:33:38.940 [WS-CLEANUP] Deleting project workspace...
00:33:38.940 [WS-CLEANUP] Deferred wipeout is used...
00:33:38.947 [WS-CLEANUP] done
00:33:38.949 [Pipeline] }
00:33:38.965 [Pipeline] // stage
00:33:38.970 [Pipeline] }
00:33:38.984 [Pipeline] // node
00:33:38.989 [Pipeline] End of Pipeline
00:33:39.027 Finished: SUCCESS
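
Editor's note: in the epilogue above, check_artifacts_size.sh only reports 'Artifacts sizes are good' before the artifacts are archived; its actual contents are not part of this log. A size gate of that shape reduces to something like the following sketch, where the directory and the 2 GiB budget are illustrative assumptions, not values from the script:

    # Fail the build if archived artifacts exceed a size budget (GNU du).
    artifacts_dir=/var/jenkins/workspace/nvme-vg-autotest/output
    limit=$((2 * 1024 * 1024 * 1024))           # assumed 2 GiB budget
    total=$(du -sb "$artifacts_dir" | cut -f1)  # total bytes used
    if (( total > limit )); then
        echo "Artifacts size $total exceeds limit $limit" >&2
        exit 1
    fi
    echo 'Artifacts sizes are good'
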